• Title/Summary/Keyword: Smart Distribution


A Study on the Choice of Export Payment Types by Applying the Characteristics of the New Trade & Logistics Environment (신(新)무역물류환경의 특성을 적용한 수출대금 결제유형 선택연구)

  • Chang-bong Kim;Dong-jun Lee
    • Korea Trade Review / v.48 no.4 / pp.303-320 / 2023
  • Recently, import and export companies have increasingly chosen T/T remittance and Surrender B/L over L/C when selecting the process and method of trade payment settlement. The new trade and logistics environment is flourishing in the era of the Fourth Industrial Revolution (4IR), and document-based trade transactions are being digitalized as electronic bills of lading and smart contracts are developed. The purpose of this study is to verify whether exporters choose export payment types based on negotiating factors, and to discuss how the characteristics of the new trade and logistics environment apply. Data for analysis were collected through surveys distributed by direct company visits, e-mail, fax, and online from February 1, 2023, to April 30, 2023. Of the 2,000 questionnaires distributed, 447 were returned, and 336 were used in the final analysis after excluding 111 deemed unsuitable for the purpose of this study. The results are as follows. First, among the negotiating factors, the product differentiation of exporters did not significantly affect the selection of export payment types. Second, the greater the purchasing advantage recognized by exporters, the higher the likelihood of using the post-transfer method. Beyond these results, this study suggests that exporters should consider adopting new payment methods, such as blockchain-based bills of lading and trade finance platforms, to adapt to the evolving trade and logistics environment, and should continue to follow initiatives for digitizing trade documents as a response to the challenges posed by bills of lading.
Future studies should examine advanced research abroad to address the limited awareness of these issues in Korea.

Effect of Service Convenience on the Relationship Performance in B2B Markets: Mediating Effect of Relationship Factors (B2B 시장에서의 서비스 편의성이 관계성과에 미치는 영향 : 관계적 요인의 매개효과 분석)

  • Han, Sang-Lin;Lee, Seong-Ho
    • Journal of Distribution Research / v.16 no.4 / pp.65-93 / 2011
  • As the buyer-seller relationship has grown closer and long-term relationships have become more important in B2B markets, service and service convenience have grown in importance alongside the product. In homogeneous markets, where service offerings are similar and therefore not a key competitive differentiator, providing greater convenience may yield a competitive advantage. Service convenience, as conceptualized by Berry et al. (2002), is defined as consumers' perceptions of the time and effort related to buying or using a service. B2B customers therefore care not only about service quality but also about how quickly a service is provided and how much non-monetary cost, such as time and effort, service convenience saves them. This study investigates the impact of service convenience on relationship factors such as relationship satisfaction, relationship commitment, and relationship performance. Its purpose is to determine whether service convenience can be a new antecedent of relationship quality and relationship performance, and to examine how the five service convenience dimensions (decision convenience, access convenience, transaction convenience, benefit convenience, post-benefit convenience) affect customers' relationship satisfaction, relationship commitment, and relationship performance.
Service convenience comprises five fundamental components: decision convenience (the perceived time and effort costs of service purchase or use decisions), access convenience (the costs of initiating service delivery), transaction convenience (the costs of finalizing the transaction), benefit convenience (the costs of experiencing the core benefits of the offering), and post-benefit convenience (the costs of re-establishing subsequent contact with the firm). There have been no earlier studies of perceived service convenience in the industrial market; prior work on service convenience has been conducted in the consumer market or has dealt with convenience aspects of the service process. This consumer-market service convenience measure can nonetheless be a useful tool to assess service quality in the B2B market. The conceptualization developed by Berry et al. (2002) reflects a multistage, experiential consumption process in which evaluations of convenience vary at each stage, which makes the measure well suited to B2B service environments with complex processes and various service types. In particular, when B2B service is viewed as sequential stages of service delivery, as in Kumar and Kumar (2004), Berry's measure, which reflects the sequential flow of service delivery, is suitable for establishing B2B service convenience. For this study, data were gathered from respondents who frequently buy business services and analyzed by structural equation modeling. The sample size is 119. Composite reliability values and average variance extracted (AVE) values were examined for each variable to establish reliability.
Convergent validity of the measurement model was checked by confirmatory factor analysis (CFA), and discriminant validity was assessed by examining the correlation matrix of the constructs: for each pair of constructs, the square root of the average variance extracted exceeded their correlation, supporting discriminant validity. Hypotheses were tested with SmartPLS 2.0; PLS path values were calculated, followed by a bootstrap re-sampling method. Among the five service convenience dimensions, four (decision, transaction, benefit, and post-benefit convenience) positively affected customers' relationship satisfaction, relationship commitment, and relationship performance, indicating that service convenience is an important cue for improving the buyer-seller relationship. The remaining dimension, access convenience, did not affect relationship quality or performance, implying that this dimension is not an important driver of cumulative satisfaction. Cumulative satisfaction is distinguished from transaction-specific customer satisfaction, which is an immediate post-purchase evaluative judgment or affective reaction to the most recent transactional experience with the firm. Because access convenience minimizes the physical effort of initiating an exchange, its effect on relationship satisfaction, which resembles cumulative satisfaction, may matter relatively less than its effect on transaction-specific satisfaction. In addition, because it is relatively difficult to change existing transaction partners in B2B markets compared to consumer markets, B2B firms focus more on service quality, price, benefits, and follow-up service than on convenience of time or place.
The partial least squares analysis also reveals that customers' relationship satisfaction and commitment mediate between service convenience and relationship performance: management and investment that improve service convenience foster positive relationship satisfaction, which in turn enhances relationship commitment and relationship performance. In conclusion, service convenience management is an important part of successful relationship performance management, and service convenience is an important antecedent of buyer-seller relationship outcomes such as relationship commitment and relationship performance. Although competitive service development and service quality improvement remain important, enhancing service convenience matters even more for improving relationship performance. Given the pressure to provide increased convenience, it is not surprising that organizations have made significant investments in enhancing the convenience of their product and service offerings.
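The Fornell-Larcker discriminant validity check described above can be sketched numerically. This is a minimal illustration, not the paper's analysis: the indicator loadings and the correlation matrix below are made-up numbers for three hypothetical constructs.

```python
# Illustrative Fornell-Larcker check: for each pair of constructs, the
# square root of the AVE must exceed their correlation. All numbers here
# are assumptions for demonstration, not the paper's data.
import numpy as np

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized indicator loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Assumed standardized loadings for three illustrative constructs.
loadings = {
    "decision_convenience":      [0.82, 0.79, 0.85],
    "relationship_satisfaction": [0.88, 0.81, 0.84],
    "relationship_commitment":   [0.77, 0.83, 0.80],
}
names = list(loadings)
sqrt_ave = {c: np.sqrt(average_variance_extracted(l)) for c, l in loadings.items()}

# Assumed inter-construct correlation matrix (symmetric, unit diagonal).
corr = np.array([
    [1.00, 0.52, 0.47],
    [0.52, 1.00, 0.61],
    [0.47, 0.61, 1.00],
])

# Discriminant validity holds if each construct's sqrt(AVE) exceeds its
# correlation with every other construct.
valid = all(
    sqrt_ave[names[i]] > corr[i, j] and sqrt_ave[names[j]] > corr[i, j]
    for i in range(len(names)) for j in range(i + 1, len(names))
)
print(valid)
```

With these illustrative numbers every sqrt(AVE) (about 0.80-0.84) exceeds the largest inter-construct correlation (0.61), so the check passes.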


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has driven vigorous development of related technologies and tools. The development of IT and the rising penetration of smart devices are also producing large amounts of data, so data analysis technology is rapidly becoming popular, attempts to gain insights through data analysis continue to increase, and big data analysis will only grow in importance across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to each analysis requester, but rising interest has spurred computer programming education and the development of many analysis programs. Accordingly, the entry barriers to big data analysis are gradually falling, analysis technology is spreading, and big data analysis is increasingly expected to be performed by the requesters themselves. Alongside this, interest in various kinds of unstructured data, especially text, is continually increasing. The emergence of new web platforms and techniques has brought mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining embraces various theories and techniques for text analysis; among its many techniques, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to many documents, and it raises a scalability problem: processing time increases sharply with the number of analysis objects. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome it, a divide-and-conquer approach can be applied: divide the documents into sub-units and derive topics by repeating topic modeling on each unit. This method enables topic modeling over a large number of documents with limited system resources, improves processing speed, and can significantly reduce analysis time and cost by allowing documents to be analyzed in each location without first combining them. Despite these advantages, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such a methodology must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has been studied less thoroughly than other lines of topic modeling research. In this paper, we propose a topic modeling approach that solves both problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirmed that the proposed methodology can provide results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
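The divide-and-conquer idea can be sketched in a few lines. This is not the paper's implementation: the toy corpus, the use of scikit-learn's LDA, and mapping local to global topics by cosine similarity of topic-word distributions with a one-to-one (Hungarian) assignment are all illustrative assumptions.

```python
# Sketch: fit LDA globally and on a local subset over a shared vocabulary,
# then map each local topic to a global topic by topic-word similarity.
# Corpus, model choice, and mapping strategy are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.optimize import linear_sum_assignment

docs = [
    "stock market trade price", "market price stock finance",
    "soccer game team score", "team score soccer player",
    "stock finance trade market", "player game team soccer",
]

# Shared vocabulary so topic-word vectors are comparable across models.
vec = CountVectorizer()
X = vec.fit_transform(docs)

def topic_word_dists(model):
    """Normalize components_ rows into topic-word probability distributions."""
    c = model.components_
    return c / c.sum(axis=1, keepdims=True)

global_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
local_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X[:3])  # one local set

g, l = topic_word_dists(global_lda), topic_word_dists(local_lda)

# Cosine similarity between every (local topic, global topic) pair.
sim = (l / np.linalg.norm(l, axis=1, keepdims=True)) @ \
      (g / np.linalg.norm(g, axis=1, keepdims=True)).T

# One-to-one mapping maximizing total similarity (Hungarian method).
rows, cols = linear_sum_assignment(-sim)
mapping = dict(zip(rows, cols))
print(mapping)
```

In a full pipeline, each local set would be modeled where it resides, and only the compact topic-word distributions would be transferred for mapping against the RGS topics.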

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, which makes it difficult to account for the characteristics of multidimensional data and of time series data at once. With multidimensional data, correlations between variables must be considered, yet existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques that increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural network techniques are now actively studied. Statistical methods are difficult to apply to non-homogeneous data and do not detect local outliers well. Regression-based methods learn a regression formula under parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or the data contain noise or outliers, and they carry the restriction that the training data must be handled for noise and outliers. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions.
In addition, it can be trained without labeled data, i.e., in an unsupervised manner. However, autoencoders remain limited in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of time series characteristics. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the limitations in identifying local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and images; the different modalities share the autoencoder's bottleneck, through which their correlations are learned. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of the autoencoders over 41 variables was confirmed for the proposed and comparison models. Restoration performance differs by variable: the loss values for the Memory, Disk, and Network modalities are small in all three models, so restoration operates normally; the Process modality showed no significant difference across the three models; and the CPU modality showed excellent performance with CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked CMAE first, followed by MAE and UAE.
In particular, recall was 0.9828 for CMAE, confirming that it detects almost all abnormalities. The model's accuracy improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has advantages beyond performance: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensional increase they cause can slow inference, whereas the proposed model's inference speed and ease of model management make it well suited to practical tasks.
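The dimensionality argument above can be made concrete with a small sketch. The 41 monitoring variables come from the abstract; the window width and the cyclical (sin, cos) time encoding for the conditional input are assumptions for illustration, since the abstract only states that time was used as the condition.

```python
# Sketch of the dimensionality trade-off: a sliding window multiplies the
# input dimension by the window width, while a cyclical time encoding used
# as a conditional input adds only two dimensions. Window width is assumed.
import numpy as np

n_vars, window = 41, 12  # 41 monitoring variables (abstract); window width assumed

# Sliding-window preprocessing: each sample stacks `window` time steps.
sliding_dim = n_vars * window

def time_condition(hour, period=24):
    """Cyclical (sin, cos) encoding of time, used as the conditional input."""
    angle = 2 * np.pi * hour / period
    return np.array([np.sin(angle), np.cos(angle)])

# Conditional input: raw variables plus a 2-dimensional time encoding.
conditional_dim = n_vars + time_condition(0).size

print(sliding_dim, conditional_dim)  # prints: 492 43
```

The window approach inflates each sample to 492 dimensions, while the conditional approach keeps it at 43, which matches the abstract's point about avoiding dimensional increase at inference time.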

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.111-126 / 2020
  • Most industries, exposed to competitive environments, have recently become aware of the importance of customer lifetime value. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers: retaining existing customers is far more economical, and the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent churn and improve retention rates are known to benefit not only in profitability but also in brand image, through improved customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big data-based performance marketing theme owing to developments in business machine learning technology. Until now, churn prediction research has been carried out actively in highly competitive sectors where churn management is urgent, such as mobile telecommunications, finance, distribution, and gaming. These studies focused on improving the performance of the churn prediction model itself: comparing the performance of various models, exploring features effective for forecasting churn, or developing new ensemble techniques. They were limited in practical terms because most treated the entire customer base as a single group when developing the predictive model. In short, the main purpose of existing research was to improve model performance, and relatively little work has sought to improve the overall churn prediction process.
In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. For effective churn prediction in heterogeneous industries, it is therefore desirable to segment customers by classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers with clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that calculates distances from inputs and does not reflect strategic intent such as loyalty. Assuming that successful churn management comes more from improving the overall process than from model performance alone, this study proposes CCP/2DL (Customer Churn Prediction based on Two-Dimensional Loyalty segmentation), a segment-based churn prediction process built on two-dimensional customer loyalty. CCP/2DL is a series of churn prediction steps that segments customers along two loyalty dimensions, quantitative and qualitative; performs secondary grouping of the customer segments according to churn patterns; and then independently applies heterogeneous churn prediction models to each churn pattern group. To assess the relative merit of the proposed process, performance comparisons were made with the most commonly applied general churn prediction process and with a clustering-based churn prediction process.
The general churn prediction process used in this study refers to predicting over all target customers as a single group with the most commonly used churn prediction method as a machine learning model. The clustering-based churn prediction process first segments customers with clustering techniques and then implements a churn prediction model for each group. In experiments conducted in cooperation with a global NGO, the proposed CCP/2DL outperformed the other methodologies in predicting churn. This churn prediction process is not only effective for predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
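The core contrast with clustering-based segmentation can be sketched as follows. This is an illustrative toy, not the paper's CCP/2DL implementation: the synthetic data, the median-split thresholds, and the choice of logistic regression are all assumptions; the paper's process additionally regroups segments by churn pattern before modeling.

```python
# Sketch: segment customers on two loyalty dimensions (quantitative and
# qualitative) via strategy-driven cut-offs, then train a separate churn
# model per segment instead of one model for all customers. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
quant_loyalty = rng.uniform(0, 1, n)   # e.g. purchase frequency (quantitative)
qual_loyalty = rng.uniform(0, 1, n)    # e.g. attitude score (qualitative)
X = np.column_stack([quant_loyalty, qual_loyalty, rng.normal(size=n)])
# Synthetic churn labels: low-loyalty customers churn more often.
churn = (rng.uniform(0, 1, n) > (quant_loyalty + qual_loyalty) / 2).astype(int)

# Two-dimensional loyalty segmentation: split each axis at its median,
# giving four interpretable segments rather than mechanical clusters.
seg = (quant_loyalty > np.median(quant_loyalty)).astype(int) * 2 + \
      (qual_loyalty > np.median(qual_loyalty)).astype(int)

# Train one churn model per segment.
models = {}
for s in np.unique(seg):
    mask = seg == s
    models[s] = LogisticRegression().fit(X[mask], churn[mask])

print(sorted(int(s) for s in models))  # four segment ids
```

Replacing the median splits with business-defined loyalty thresholds is what distinguishes this kind of segmentation from distance-based clustering: the grouping encodes strategic intent rather than geometry of the inputs.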