• Title/Summary/Keyword: Scale-model


Evaluation of Applicability of Sea Ice Monitoring Using Random Forest Model Based on GOCI-II Images: A Study of Liaodong Bay 2021-2022 (GOCI-II 영상 기반 Random Forest 모델을 이용한 해빙 모니터링 적용 가능성 평가: 2021-2022년 랴오둥만을 대상으로)

  • Jinyeong Kim;Soyeong Jang;Jaeyeop Kwon;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1651-1669
    • /
    • 2023
  • Sea ice currently covers approximately 7% of the world's ocean area, primarily concentrated in polar and high-latitude regions, and is subject to seasonal and annual variation. Time-series monitoring of sea ice area and type classification is very important because sea ice forms in various types over large spatial scales, and oil and gas exploration and other marine activities are rapidly increasing. Currently, research on sea ice type and area is conducted using high-resolution satellite images and field measurement data, but acquiring field measurements imposes limits on sea ice monitoring. High-resolution optical satellite images can visually detect and identify sea ice types over a wide area and can compensate for gaps in sea ice monitoring using the Geostationary Ocean Color Imager-II (GOCI-II), an ocean satellite with short temporal resolution. This study examined the feasibility of sea ice monitoring by training a rule-based machine learning model on training data produced from high-resolution optical satellite images and performing detection on GOCI-II images. Training data were extracted from Liaodong Bay in the Bohai Sea from 2021 to 2022, and a Random Forest (RF) model using GOCI-II was constructed for qualitative and quantitative comparison with sea ice areas obtained from existing normalized difference snow index (NDSI)-based results and high-resolution satellite images. Unlike the NDSI-based results, which underestimated the sea ice area, this study detected relatively detailed sea ice areas and confirmed that sea ice can be classified by type, enabling sea ice monitoring. If the accuracy of the detection model is improved through the continuous construction of training data and of factors influencing sea ice formation, it is expected to be usable for sea ice monitoring in high-latitude ocean areas.
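As an illustrative sketch of the classification step (not the authors' actual code), a Random Forest can be trained on per-pixel band reflectances with labels derived from high-resolution imagery; the band set, label rule, and data below are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel GOCI-II band reflectances for 200 pixels
# (column order blue, green, red, NIR is an assumption for illustration)
rng = np.random.default_rng(0)
X = rng.random((200, 4))

# Toy labels: 1 = sea ice, 0 = open water (rule invented for the sketch)
y = (X[:, 1] - X[:, 3] > 0.2).astype(int)

# Train the RF classifier and predict per-pixel sea ice presence
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = rf.predict(X)
print(pred.shape)  # one label per pixel
```

In practice the labels would come from the high-resolution optical imagery rather than a threshold rule, and the prediction would run on held-out GOCI-II scenes.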

Convergence of Remote Sensing and Digital Geospatial Information for Monitoring Unmeasured Reservoirs (미계측 저수지 수체 모니터링을 위한 원격탐사 및 디지털 공간정보 융합)

  • Hee-Jin Lee;Chanyang Sur;Jeongho Cho;Won-Ho Nam
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1135-1144
    • /
    • 2023
  • Many agricultural reservoirs in South Korea, constructed before 1970, have become aging facilities. The majority of small-scale reservoirs lack measurement systems to ascertain basic specifications and water levels, classifying them as unmeasured reservoirs. Furthermore, continuous sedimentation within the reservoirs and industrial development-induced water quality deterioration lead to reduced water supply capacity and changes in reservoir morphology. This study utilized Light Detection And Ranging (LiDAR) sensors, which provide elevation information and allow for the characterization of surface features, to construct high-resolution Digital Surface Model (DSM) and Digital Elevation Model (DEM) data of reservoir facilities. Additionally, bathymetric measurements based on multibeam echosounders were conducted to propose an updated approach for determining reservoir capacity. Drone-based LiDAR was employed to generate DSM and DEM data with a spatial resolution of 50 cm, enabling the display of elevations of hydraulic structures, such as embankments, spillways, and intake channels. Furthermore, using drone-based hyperspectral imagery, Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) were calculated to detect water bodies and verify differences from existing reservoir boundaries. The constructed high-resolution DEM data were integrated with bathymetric measurements to create underwater contour maps, which were used to generate a Triangulated Irregular Network (TIN). The TIN was utilized to calculate the inundation area and volume of the reservoir, yielding results highly consistent with basic specifications. Considering areas that were not surveyed due to underwater vegetation, it is anticipated that this data will be valuable for future updates of reservoir capacity information.
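The water-body detection step uses the standard NDWI formula, NDWI = (Green − NIR) / (Green + NIR), with water pixels typically showing positive values; a minimal sketch with made-up reflectance values:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

# Toy reflectance pixels: water reflects green strongly and absorbs NIR,
# while vegetation/land does the opposite (values are illustrative)
green = np.array([0.30, 0.10])
nir = np.array([0.05, 0.40])

mask = ndwi(green, nir) > 0.0   # a common threshold for water detection
print(mask)                     # [ True False]
```

The same pattern applies to NDVI with the red and NIR bands swapped into the formula.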

Test of Independence Between Variables to Estimate the Frequency of Damage in Heat Pipe (열수송관 파손빈도 추정을 위한 변수간 독립성 검정)

  • Myeongsik Kong;Jaemo Kang;Sungyeol Lee
    • Journal of the Korean GEO-environmental Society
    • /
    • v.24 no.12
    • /
    • pp.61-67
    • /
    • 2023
  • Heat pipes located underground in urban areas and operated under high temperature and pressure can cause large-scale human and economic damage if they fail. To predict damage in advance, damage and construction information on heat pipes is analyzed to derive independent variables correlated with the frequency of damage, and a modified simple regression model using each variable is applied in the field. However, as the correlation between the independent variables applied to the model increases, the independence between variables is violated and the reliability of the model decreases. In this study, the independence of pipe diameter, burial depth, insulation level of the monitoring system, and disconnection or short circuit of the detection line, which are judged to be interrelated, was tested to derive a method for combining variables and setting the categories needed for the damage-frequency estimation model. For the test of independence, the continuous variables pipe diameter and burial depth were each converted into three categories, the insulation level of the monitoring system was converted into two categories, and the categorical variable for disconnection or short circuit of the detection line was kept at two categories. As a result of the test of independence, the p-values between pipe diameter and burial depth, and between the insulation level of the monitoring system and disconnection or short circuit of the detection line, were lower than the significance level (α = 0.05), indicating significant associations between them. Therefore, pipe diameter and burial depth were combined into one variable, whose categories were set to 9 based on the previously set categories. The insulation level of the monitoring system and the disconnection or short circuit of the detection line were also combined into one variable. Since the insulation level is unreliable when the detection line is disconnected or short-circuited, the categories of this combined variable were set to 3.
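The test of independence described above is a standard chi-square test on a contingency table; a minimal sketch with an invented 3 × 3 table of category counts (the counts are not from the paper):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: pipe diameter category (rows: small/medium/large)
# vs. burial depth category (columns: shallow/medium/deep); counts are illustrative
table = np.array([[30, 10,  5],
                  [12, 40, 14],
                  [ 4, 15, 35]])

chi2, p, dof, expected = chi2_contingency(table)

# Reject the null hypothesis of independence at alpha = 0.05,
# which is the criterion the study uses for combining variables
if p < 0.05:
    print("reject independence: combine the two variables into one")
```

With three categories each, the combined variable has up to 3 × 3 = 9 categories, matching the paper's setting.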

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on a user's predicted preference for a particular item, using similar-item and similar-user subsets composed from users' preferences for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly and the difficulty of creating similar-item and similar-user subsets increases. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, which in turn increases the response time of the recommendation system. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run time for creating a similar-item or similar-user subset is reduced, the reliability of the recommendation system is higher than when only user preference information is used to create those subsets, and the cold-start problem is partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preference between each item cluster and user cluster.
In this phase, a list of items is made for each user by examining the item clusters in decreasing order of the inter-cluster preference of the cluster to which the user belongs, and selecting and ranking items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the heaviest load of the recommendation system, minimizing the load at run time. Therefore, the scalability problem is mitigated, and a large-scale recommendation system can be served by collaborative filtering with high reliability. Third, missing user preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user preference matrix. Existing studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea of using both, combining the predicted values of the two techniques according to the conditions of the recommendation model to improve the reliability of the recommendation service. By predicting user preferences from the item or user clusters, the time required for prediction is reduced, and missing preferences can be predicted at run time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback, by applying normalized user feedback to the vectors. This mitigates the problems caused by adopting concepts from context-based filtering, namely feature vectors based on user profiles and item properties, whose limitation is the difficulty of quantifying the qualitative features of items and users.
Therefore, the elements of the user and item feature vectors are made to match one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. This method was verified by comparing its performance with existing hybrid filtering techniques on two measures: MAE (Mean Absolute Error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for a large-scale recommendation system. This paper suggested an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations: because the technique focused on reducing time complexity, a substantial improvement in reliability was not expected. Future work will improve this technique with rule-based filtering.
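The first two methodologies can be sketched as follows: cluster users and items, estimate an inter-cluster preference matrix from observed ratings, and read off predictions for missing entries. The toy rating matrix and cluster counts are assumptions, not the paper's data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy user-item rating matrix (rows: users, columns: items, 0 = missing)
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

# Cluster users and items separately; here the rating rows/columns stand in
# for the feature vectors the paper builds from profiles and properties
uk = KMeans(n_clusters=2, n_init=10, random_state=0).fit(R)
ik = KMeans(n_clusters=2, n_init=10, random_state=0).fit(R.T)

# Inter-cluster preference: mean observed rating in each (user cluster,
# item cluster) block of the matrix
P = np.zeros((2, 2))
for uc in range(2):
    for ic in range(2):
        block = R[np.ix_(uk.labels_ == uc, ik.labels_ == ic)]
        P[uc, ic] = block[block > 0].mean()

# Predict a missing rating from the clusters the user and item belong to
u, i = 0, 2                              # user 0 never rated item 2
pred = P[uk.labels_[u], ik.labels_[i]]
print(round(pred, 2))
```

Only the small inter-cluster matrix is consulted at run time, which is the source of the technique's low response time.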

An Empirical Study on Motivation Factors and Reward Structure for User's Creative Contents Generation: Focusing on the Mediating Effect of Commitment (창의적인 UCC 제작에 영향을 미치는 동기 및 보상 체계에 대한 연구: 몰입에 매개 효과를 중심으로)

  • Kim, Jin-Woo;Yang, Seung-Hwa;Lim, Seong-Taek;Lee, In-Seong
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.141-170
    • /
    • 2010
  • User created content (UCC) is created and shared by ordinary users online. From the user's perspective, the increase in UCC has expanded the alternative means of communication, while from the business perspective UCC has formed an environment in which an abundant amount of new content can be produced. Despite this outward quantitative growth, however, many aspects of UCC do not meet the quality expectations of general users, as can be observed in pirated and user-copied content. The purpose of this research is to investigate effective methods for fostering the production of creative user-generated content. This study proposes two core elements believed to enhance content creativity, namely reward and motivation, together with a mediating factor, users' commitment, which is expected to bridge increased motivation and content creativity. Based on this perspective, this research takes an in-depth look at constructing the dimensions of reward and motivation in UCC services for creative content production, identified in three phases. First, three dimensions of rewards are proposed: the task dimension, the social dimension, and the organizational dimension. Task-dimension rewards relate to the inherent characteristics of a task, such as writing blog articles and posting photos. Four concrete ways of providing task-related rewards in UCC environments are suggested: skill variety, task significance, task identity, and autonomy. Social-dimension rewards relate to the connected relationships among users. The organizational dimension consists of monetary payoff and recognition from others. Second, two types of motivation are suggested to be affected by the diverse reward schemes: intrinsic motivation and extrinsic motivation.
Intrinsic motivation occurs when people create new UCC content for its own sake, whereas extrinsic motivation occurs when people create new content for other purposes such as fame and money. Third, commitment is suggested to work as an important mediating variable between motivation and content creativity. We believe commitment is especially important in online environments because it has been found to exert a stronger impact on Internet users than other relevant factors do. Two types of commitment are suggested in this study: emotional commitment and continuity commitment. Finally, content creativity is proposed as the final dependent variable. We provide a systematic method to measure the creativity of UCC content based on prior studies of creativity measurement, including expert evaluation of blog pages posted by Internet users. To test the theoretical model, 133 active blog users were recruited to participate in a group discussion as well as a survey. They were asked to fill out a questionnaire on their commitment, motivation, and rewards for creating UCC content. At the same time, their creativity was measured by independent experts using the Torrance Tests of Creative Thinking. Finally, two independent raters visited the participants' blog pages and evaluated their content creativity using the Creative Product Semantic Scale. All the data were compiled and analyzed through structural equation modeling. We first conducted a confirmatory factor analysis to validate the measurement model; the measures used in this study satisfied the requirements of reliability, convergent validity, and discriminant validity. Given that the measurement model was valid and reliable, we proceeded to a structural model analysis. The results indicated that all the variables in the model had sufficient explanatory power in terms of R-square values.
The study results identified several important reward schemes. First of all, skill variety, task significance, task identity, and autonomy were all found to have significant influences on the intrinsic motivation for creating UCC content. The relationship with other users was found to have a strong influence on both intrinsic and extrinsic motivation, and the opportunity to gain recognition for one's UCC work had a significant impact on extrinsic motivation. Contrary to our expectation, however, monetary compensation was found not to have a significant impact on extrinsic motivation. Commitment was also found to be an important mediating factor between motivation and content creativity in the UCC environment: a fully mediated model had the highest explanatory power compared with no-mediation or partially mediated models. The paper ends with the implications of these results. First, from the theoretical perspective, this study proposes and empirically validates commitment as an important mediating factor between motivation and content creativity, reflecting the characteristics of the online environment in which UCC creation occurs voluntarily. Second, from the practical perspective, this study proposes several concrete reward factors germane to the UCC environment and estimates their effectiveness for content creativity. In addition to the quantitative results on the relative importance of the reward factors, the study proposes concrete ways to provide the rewards in the UCC environment based on FGI data collected after the participants finished answering the survey questions. Finally, from the methodological perspective, this study suggests and implements a way to measure UCC content creativity independently of the content generators' creativity, which can be used by future research on UCC creativity.
In sum, this study proposes and validates important reward features and their relations to the motivation, commitment, and the content creativity in UCC environment, which is believed to be one of the most important factors for the success of UCC and Web 2.0. As such, this study can provide significant theoretical as well as practical bases for fostering creativity in UCC contents.

Self-Regulatory Mode Effects on Emotion and Customer's Response in Failed Services - Focusing on the moderating effect of attribution processing - (고객의 자기조절성향이 서비스 실패에 따른 부정적 감정과 고객반응에 미치는 영향 - 귀인과정에 따른 조정적 역할을 중심으로 -)

  • Sung, Hyung-Suk;Han, Sang-Lin
    • Asia Marketing Journal
    • /
    • v.12 no.2
    • /
    • pp.83-110
    • /
    • 2010
  • Dissatisfied customers may express their dissatisfaction behaviorally, and these behavioral responses may impact a firm's profitability. How do we model the impact of self-regulatory orientation on emotions and subsequent customer behaviors? The positive and negative emotions experienced in these situations will influence the overall degree of satisfaction or dissatisfaction with the service (Zeelenberg and Pieters 1999). Most likely, these specific emotions will also partly determine subsequent behavior in relation to the service and service provider, such as the likelihood of complaining, the degree to which customers will switch or repurchase, and the extent of word-of-mouth communication they will engage in (Zeelenberg and Pieters 2004). This study investigates the antecedents and consequences of negative consumption emotion and the moderating effect of attribution processing in an integrated model (self-regulatory mode → specific emotions → behavioral responses). We focus on the fact that regret and disappointment affect consumer behavior. There are essentially two approaches in this research: the valence-based approach and the specific-emotions approach. The authors indicate theoretically and show empirically that it matters to distinguish these approaches in services research. The present studies examined the influence of two regulatory mode concerns (locomotion orientation and assessment orientation) on the experience of post-decisional regret and disappointment (Pierro, Kruglanski, and Higgins 2006; Pierro et al. 2008). When contemplating a decision with a negative outcome, it was predicted that high (vs. low) locomotion would induce more disappointment than regret, whereas high (vs. low) assessment would induce more regret than disappointment.
The validity of the measurement scales was also confirmed by evaluations from the participating respondents and an independent advisory panel, which offered recommendations throughout the primary, exploratory phases of the study. The resulting goodness-of-fit statistics were an RMR or RMSEA of 0.05, GFI and AGFI greater than 0.9, and a chi-square of 175.11. The indicators of each construct were good measures of their variables and had high convergent validity, as evidenced by reliabilities above 0.9. Some items were deleted, leaving those that reflected the cognitive dimension of importance. These results indicate that the measurement model fits the sample data well and is adequate for use. The scale for each factor was set by fixing the factor loading of one of its indicator variables and then applying the maximum likelihood estimation method. The directions of the effects in the model are ultimately supported by the theory underpinning its causal linkages. This research proposed six hypotheses on six latent variables and tested them through structural equation modeling; six alternative measurements were compared through statistical significance tests of the paths of the research model and the overall fit of the structural equation model, with successful results. Locomotion orientation influences disappointment more positively when internal attribution is high than when it is low, and assessment orientation influences regret more positively when external attribution is high than when it is low. In sum, the results of our studies suggest that assessment and locomotion concerns, both as chronic individual predispositions and as situationally induced states, influence the amount of regret and disappointment people experience.
These findings contribute to our understanding of regulatory mode, regret, and disappointment. Previous studies of regulatory mode have paid relatively little attention to the post-actional, evaluative phase of self-regulation. The present findings indicate that assessment concerns and locomotion concerns are clearly distinct in this phase, with individuals higher in assessment delving more into possible alternatives to past actions and individuals higher in locomotion engaging less in such reflective thought. This suggests that, separate from decreasing the amount of counterfactual thinking per se, individuals with locomotion concerns want to move on, to get on with it. Regret is about the past, not the future; thus, individuals with locomotion concerns are less likely to experience regret. The results supported our predictions. We discuss the implications of these findings for the nature of regret and disappointment from the perspective of their relation to regulatory mode. Self-regulatory mode and the specific emotions (disappointment and regret) were also assessed, and their influence on customers' behavioral responses (inaction, word of mouth) was examined using a sample of 275 customers. It was found that these specific emotions have a direct impact on behavior over and above the effects of general negative affect. Hence, we argue against collapsing emotions such as regret and disappointment into a single valence measure and in favor of a specific-emotions approach to self-regulation. Implications for services marketing practice and theory are discussed.


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. Since CNN has strength in interpreting images, the model proposed in this study adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time-series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time-series graphs for the divided dataset in step 2. Each graph is drawn as a 40 × 40 pixel image, with each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation datasets, using 80% of the total as the training dataset and the remaining 20% as the validation dataset. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two sets of convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers. In the pooling layer, a 2 × 2 max-pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution and hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time-series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
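The layer sizes quoted in the abstract can be sketched in PyTorch; the exact ordering of the two convolution layers relative to the pooling layer is an assumption, since the abstract does not fully specify it:

```python
import torch
import torch.nn as nn

# Sketch of the CNN-FG architecture from the abstract: two convolution
# layers (6 and 9 filters of 5 x 5), 2 x 2 max pooling, hidden layers of
# 900 and 32 nodes, and a 2-node softmax output (upward vs. downward)
model = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(),   # 40 -> 36
    nn.Conv2d(6, 9, kernel_size=5), nn.ReLU(),   # 36 -> 32
    nn.MaxPool2d(2),                             # 32 -> 16
    nn.Flatten(),                                # 9 * 16 * 16 = 2304 features
    nn.Linear(9 * 16 * 16, 900), nn.ReLU(),      # hidden layer 1
    nn.Linear(900, 32), nn.ReLU(),               # hidden layer 2
    nn.Linear(32, 2), nn.Softmax(dim=1),         # class probabilities
)

x = torch.randn(1, 3, 40, 40)  # one 40 x 40 RGB time-series graph
probs = model(x)
print(probs.shape)
```

Each 5-day window of the twelve indicators would be rendered as one such 40 × 40 RGB image before being fed to the network.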

Extension Method of Association Rules Using Social Network Analysis (사회연결망 분석을 활용한 연관규칙 확장기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.111-126
    • /
    • 2017
  • Recommender systems based on association rule mining contribute significantly to sellers' sales by reducing the time consumers spend searching for products they want. Recommendations based on the frequency of transactions, such as orders, can effectively single out the products that are statistically marketable among many products. A product with a high possibility of sales, however, can be omitted from recommendations if it records an insufficient number of transactions at the beginning of its sale. Products missing from the associated recommendations lose the chance of exposure to consumers, which leads to a decline in transactions; diminished transactions in turn create a vicious circle of lost opportunities to be recommended, so initial sales are likely to remain stagnant for a certain period. Products susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study aimed to expand association rules to include in the recommendation list those products whose initial transaction frequency is low despite their potential for high sales. The particular purpose is to predict the strength of the direct connection between two unconnected items through the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as an interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes in the middle of the paths that indirectly connect two nodes lacking a direct connection. The next step identifies the number of such paths and the shortest among them. These extracted features are used as independent variables in a regression analysis to predict the future connection strength between the nodes.
The strength of the connection between the two nodes, defined by the number of links formed between them, is measured after a certain period of time. The regression analysis confirms that the number of paths between two products, the length of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential connection strength. This study used actual order transaction data collected over three months, from February to April 2016, from an online commerce company. To reduce the complexity of the analysis as the scale of the network grows, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain an antecedent-consequent pair, which secures a link for constituting the social network; the direction of each link was determined by the order in which the goods were purchased. Excluding the last ten days of the data collection period, the social network of associated items was built for the extraction of independent variables, and the model predicts the number of links to be connected in the following ten days from these explanatory variables. Of the 5,711 previously unconnected links, 611 were newly connected during the last ten days. In experiments, the proposed model demonstrated excellent predictions: of the 571 links the model predicted, 269 were confirmed to have been connected. This is 4.4 times more than the average of 61 that can be found without any prediction model. This study is expected to be useful for industries whose new products launch quickly with short life cycles, since exposure time is critical for them. It can also be used to detect diseases that are rarely found in the early stages of medical treatment because of the low incidence of outbreaks.
Since the complexity of social network analysis is sensitive to the number of nodes and links that make up the network, this study was confined to a single category of miscellaneous goods. Future research should consider that this condition may limit the opportunity to detect unexpected associations between products belonging to different categories.
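The path-based features the abstract describes (shortest-path length and path count between two not-yet-connected items) can be sketched in plain Python. The item names and edges below are invented for illustration and are not from the study's data; the directed edges run from antecedent to consequent purchase, as in the paper.

```python
from collections import deque

# Hypothetical item co-purchase network (antecedent -> consequent).
edges = [("mug", "coaster"), ("coaster", "tray"), ("mug", "candle"),
         ("candle", "tray"), ("tray", "vase")]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set())

def shortest_path_length(src, dst):
    """BFS distance from src to dst in the directed graph; None if unreachable."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return None

def count_paths(src, dst, max_len=4):
    """Number of simple directed paths from src to dst, up to max_len edges."""
    def dfs(node, visited, depth):
        if depth > max_len:
            return 0
        if node == dst:
            return 1
        return sum(dfs(n, visited | {n}, depth + 1)
                   for n in adj[node] if n not in visited)
    return dfs(src, {src}, 0)

# Features for the currently unconnected pair ("mug", "vase"):
dist = shortest_path_length("mug", "vase")   # 3 (e.g. mug -> coaster -> tray -> vase)
n_paths = count_paths("mug", "vase")         # 2 distinct paths
```

In the study these quantities, together with centralities of the intermediate nodes, feed a regression that predicts whether the pair will become directly connected in the following period.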

The Usefulness of Product Display of Online Store by the Product Type of Usage Situation - Focusing on the moderate effect of the product portability - (사용상황별 제품유형에 따른 온라인 점포 제품디스플레이의 유용성 - 제품 휴대성의 조절효과를 중심으로 -)

  • Lee, Dong-Il;Choi, Seung-Hoon
    • Journal of Distribution Research
    • /
    • v.16 no.2
    • /
    • pp.1-24
    • /
    • 2011
  • 1. Introduction: In contrast to the offline purchasing environment, an online store cannot offer consumers the sense of touch or direct visual inspection of its products. The builder of an online shopping mall should therefore provide more concrete and detailed product information (Kim 2008), and Alba et al. (1997) predicted that the quality of the offered information determines post-purchase consumer satisfaction. In practice, many fashion and apparel online shopping malls present pictures of products worn by real models to enhance the usefulness of the product information. Virtual product experience has also been suggested as a way of overcoming online consumers' limited perceptual capability (Jiang & Benbasat 2005). However, adopting and deploying virtual reality (VR) tools requires high investment and technical specialty compared with text and picture offerings (Shaffer 2006). This can raise the entry barrier to online shopping for small retailers and may demand a high level of perceptual effort from consumers, so an expensive technological solution could affect consumer decision-making processes negatively. Nevertheless, most previous research on online product information provision suggests that VR is the more effective tool. 2. Research Model and Hypotheses: The research model suggests that the VR effect could be moderated by the product type of the usage situation. Product types are defined as portable products and installed products, and the information offering types as a still picture of the product, a picture of the product with a real-person model, and VR. 3. Methods and Results: 3.1. Experimental design and measured variables: We designed a 2 (product types) x 3 (product information types) experiment and measured dependent variables such as information usefulness, attitude toward the shopping mall, overall product quality, purchase intention, and revisiting intention. Information usefulness and attitude toward the shopping mall were measured with multi-item scales; in the reliability test, Cronbach's alpha for each variable exceeded 0.6, ensuring the internal consistency of the items. 3.2. Manipulation check: The main concern of this study is to verify the moderating effect of the product type of the usage situation, and the check confirmed that our experimental manipulation of product type was successful. 3.3. Results: There was a significant main effect of information type on only one dependent variable, attitude toward the shopping mall. As predicted, VR showed the highest mean among the information types, so H1 was partially supported; no main effect of product type was found. To evaluate H2 and H3, a two-way ANOVA was conducted, which revealed interaction effects between information type and product type on three dependent variables: information usefulness, overall product quality, and purchase intention. As predicted, the picture of the product with a real-person model had the highest mean among the information types for portable products, whereas VR had the highest mean for installed products; thus H2 and H3 were supported. 4. Implications: The present study found a moderating effect of the product type of the usage situation, from which the following managerial implications are drawn. First, information type affected only the attitude toward the shopping mall, which means that VR effects alone are not enough to improve understanding of the product itself; retailers must therefore consider when and how to use VR tools. Second, interaction effects were found on information usefulness, overall product quality, and purchase intention, suggesting that taking the usage situation into account helps consumers understand a product and promotes purchase intention. In conclusion, online retailers who want to meet consumers' needs must fully consider not only product attributes but also product usage situations.
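The reliability check the abstract mentions, Cronbach's alpha for a multi-item scale, follows a simple formula: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch, with invented respondent scores (5 respondents, 3 items) rather than the study's data:

```python
# Hypothetical 5 x 3 matrix of Likert-scale responses (rows = respondents).
scores = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

def variance(xs):
    # Sample variance (n - 1 denominator), as typically used for alpha.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(scores[0])                                            # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])            # variance of scale totals
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)        # about 0.90 here
```

A value above the study's 0.6 threshold indicates acceptable internal consistency for the multi-item measures.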

  • Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

    • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
      • Journal of Intelligence and Information Systems
      • /
      • v.23 no.2
      • /
      • pp.1-17
      • /
      • 2017
    • A deep learning framework is software designed to help develop deep learning models. Two of its important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, the Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Since the introduction of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development, and Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. Given this trend, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of a deep learning model can be represented as a computational graph consisting of nodes and edges; a partial derivative can be attached to each edge of the graph, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding, judged simply by the lengths of the codes, is in the order of CNTK, Tensorflow, and Theano; the learning curve and the ease of coding were not the main concern.
By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. Low-level coding, however, is not always bad: it gives flexibility, and with low-level coding as in Theano we can implement and test any new deep learning model or any new search method we can think of. Regarding execution speed, our assessment is that there is no meaningful difference among the frameworks. In the experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK the experimental environment could not be kept identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU, but we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning models, the availability of sufficient examples and references matters as well.
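The chain-rule mechanism over a computational graph that the abstract describes can be illustrated with a toy reverse-mode automatic differentiator. This is a minimal sketch of the general idea, not code from Theano, Tensorflow, or CNTK; the `Node` class and operations are invented for illustration.

```python
class Node:
    """A node in a computational graph, storing its value, its parents
    with the local partial derivative on each edge, and its accumulated gradient."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_node, local_partial) pairs
        self.grad = 0.0

def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    # d(a*b)/da = b, d(a*b)/db = a
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(node, grad=1.0):
    """Propagate a gradient contribution to this node and, via the chain
    rule, down every edge of the graph to its parents."""
    node.grad += grad
    for parent, local in node.parents:
        backward(parent, grad * local)

# f(x, y) = (x + y) * x, so df/dx = 2x + y and df/dy = x.
x, y = Node(3.0), Node(2.0)
f = mul(add(x, y), x)
backward(f)
# f.value = 15.0, x.grad = 8.0, y.grad = 3.0
```

Real frameworks build the same kind of graph from user code and run an optimized, topologically ordered backward pass, but the principle, multiplying local edge derivatives along paths and summing over paths, is the one shown here.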


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.