• Title/Summary/Keyword: E-Paper


Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park;Yeon-Suk Jeong;Young-Cheol Yoon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.1
    • /
    • pp.67-76
    • /
    • 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing with a rigid body-based MLS differencing approach to measure mechanical variables and analyze the impact of target location and image resolution. This method assesses the displacement of the target attached to the sample through digital image processing and allocates this displacement to the node displacement of the MLS differencing method, which solely employs nodes to calculate mechanical variables such as stress and strain of the studied object. We propose an effective method to measure the displacement of the target's center of gravity using digital image processing. The calculation of mechanical variables through the MLS differencing method, incorporating image-based target displacement, facilitates easy computation of mechanical variables at arbitrary positions without constraints from meshes or grids. This is achieved by acquiring the accurate displacement history of the test specimen and utilizing the displacement of tracking points with low rigidity. The developed testing method was validated by comparing the measurement results of the sensor with those of the DIP-MLS testing method in a three-point bending test of a rubber beam. Additionally, numerical analysis results simulated only by the MLS differencing method were compared, confirming that the developed method accurately reproduces the actual test and shows good agreement with numerical analysis results before significant deformation. Furthermore, we analyzed the effects of boundary points by applying 46 tracking points, including corner points, to the DIP-MLS testing method. This was compared with using only the internal points of the target, determining the optimal image resolution for this testing method. Through this, we demonstrated that the developed method efficiently addresses the limitations of direct experiments or existing mesh-based simulations. It also suggests that digitalization of the experimental-simulation process is achievable to a considerable extent.
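
A minimal sketch of the image-processing step described above: estimating a target's center-of-gravity displacement between two frames by thresholding and taking the centroid of the dark target pixels. The synthetic frames, threshold, and pixel-to-length scale are illustrative assumptions, and the MLS differencing solver that consumes this displacement is not shown.

```python
# Hypothetical sketch of measuring a target's center-of-gravity displacement
# between two grayscale frames, as in the DIP step of a DIP-MLS-style test.
# Frames, threshold, and the mm-per-pixel scale are illustrative assumptions;
# the MLS differencing computation that uses this displacement is omitted.
import numpy as np

def target_centroid(frame, threshold=0.5):
    """Centroid (row, col) of pixels darker than the threshold (the target)."""
    mask = frame < threshold
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

# Two synthetic 100x100 frames with a dark square target moving 3 px down, 1 px right
frame0 = np.ones((100, 100)); frame0[40:50, 40:50] = 0.0
frame1 = np.ones((100, 100)); frame1[43:53, 41:51] = 0.0

mm_per_pixel = 0.1                     # assumed calibration factor
disp_px = target_centroid(frame1) - target_centroid(frame0)
disp_mm = disp_px * mm_per_pixel
print("displacement (mm):", disp_mm)   # ~[0.3, 0.1], assigned to MLS node displacement
```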

Dual Path Model in Store Loyalty of Discount Store (대형마트 충성도의 이중경로모형)

  • Ji, Seong-Goo;Lee, Ihn-Goo
    • Journal of Distribution Research
    • /
    • v.15 no.1
    • /
    • pp.1-24
    • /
    • 2010
  • I. Introduction The domestic discount store industry was reorganized into two major players and one mid-sized player, and Home Plus took over Home Ever in 2008. As of October 2008, E-Mart has 118 outlets, Home Plus 112, and Lotte Mart 60. With a total of 403 outlets, the industry is approaching a saturation point and has entered the mature stage of the retail life cycle. Firms therefore put more effort into retaining existing customers than into acquiring new ones, and this competition leads them to regard 'store loyalty' as the primary strategic tool for sustainable competitiveness. In other words, the strategic goal of a discount store is to raise customers' repurchase rate by increasing store loyalty. If retailers can identify the main drivers of store loyalty, they can design more efficient and effective retail strategies that bring about more sales and profit. In this practical sense, many papers focus on the antecedents of store loyalty. Researchers have examined causal relationships between store loyalty and antecedents such as store characteristics, store image, in-store atmosphere, in-store sales promotion, service quality, customer characteristics, crowding, switching cost, trust, satisfaction, and commitment. Recently, academic researchers and practitioners have become interested in the 'dual path model for service loyalty'. There are two paths to store loyalty: the first emphasizes the symbolic and emotional dimension of the service brand, and the second focuses on the quality of products and services. We call the former the extrinsic path and the latter the intrinsic path, which means that consumers' cognitive path to store loyalty is not single but dual. Existing studies on the dual path model are as follows. First, on the extrinsic path, some papers in domestic settings show a 'store personality-identification-loyalty' path. Second, service quality affects loyalty, a behavioral variable, through the mediation of customer satisfaction; however, it is difficult to find an empirical paper that applies this mediating model to domestic discount stores. Domestic research on store loyalty has addressed both the intrinsic and the extrinsic path, but attention to the intrinsic path has been relatively scarce, so there is a need to integrate the extrinsic and intrinsic paths. This study is also meaningful for the retail industry because retailers seek competitiveness through store loyalty. The purpose of this paper is therefore to integrate and complement the two existing paths into one specific model, the dual path model, which includes both an intrinsic and an extrinsic path to store loyalty. With this research, we expect to clarify the full process by which customers form store loyalty, which had not been clearly explained. In other words, we propose a dual path model of discount store loyalty that originates from store personality and service quality. The model consists of an extrinsic path, discount store personality$\rightarrow$store identification$\rightarrow$store loyalty, and an intrinsic path, service quality of the discount store$\rightarrow$customer satisfaction$\rightarrow$store loyalty. II. Research Model The dual path model integrates the intrinsic path and the extrinsic path into one specific model.
The intrinsic path emphasizes quality characteristics, while the extrinsic path focuses on brand characteristics; the intrinsic path is based on an information-processing perspective, and the extrinsic path on the symbolic and emotional dimension of the brand. The model is composed of an extrinsic path, discount store personality$\rightarrow$store identification$\rightarrow$store loyalty, and an intrinsic path, service quality of the discount store$\rightarrow$customer satisfaction$\rightarrow$store loyalty. The hypotheses are as follows. Hypothesis 1: Service quality perceived by discount store customers has a positive effect on customer satisfaction. Hypothesis 2: Store personality perceived by discount store customers has a positive effect on store identification. Hypothesis 3: Customer satisfaction in the discount store has a positive effect on store loyalty. Hypothesis 4: Store identification has a positive effect on store loyalty. III. Results and Implications We sampled consumers who patronize discount stores. Using structural equation modeling (SEM), we empirically tested the validity and fit of the dual path model of store loyalty in discount stores. The fit indices show that the model fits the data well. On the intrinsic path, service quality (SQ) is positively related to customer satisfaction (CS), and customer satisfaction has a highly significant positive effect on store loyalty (SL). On the extrinsic path, store personality (SP) is positively related to store identification (SI), which in turn has a significant effect on store loyalty. There are several theoretical and practical implications. First, many studies on discount store loyalty have been conducted from various perspectives, but there has been no integrative view of the issue. This research was therefore designed to integrate various and sometimes conflicting arguments into one systematic model; we empirically tested a dual path model of store loyalty formation and provided a systematic, integrative framework that we hope will stimulate creative follow-up research. Second, previous papers have focused on the relationship between store loyalty and antecedents such as store characteristics, atmosphere, in-store sales promotion, service quality, trust, and commitment, but a single path, intrinsic or extrinsic, limits a thorough understanding of how store loyalty is formed. Going beyond these limits of a single path, we propose a new, dual path to store loyalty. Third, discount store firms design and execute marketing strategies to increase store loyalty, and this research provides practitioners with a reference framework for strategy formation because it presents an integrated and systematic path to store loyalty. A distinctive feature of this study is that it represents six sub-dimensions of service quality on the intrinsic path and four sub-dimensions of store personality on the extrinsic path. Marketers can thus plan more analytically with these concrete sub-dimensions of service quality and store personality, and can draw on them when designing MPR, advertising, campaigns, and sales promotions to outperform competitors.
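
A rough illustration of how the two mediated paths could be estimated. The paper itself fits a full SEM with latent constructs; the two-stage OLS approximation, the survey file, and the column names (sq, sp, cs, si, sl) below are assumptions for the sketch only.

```python
# Hypothetical sketch: approximating the two paths of the dual path model
# with two-stage OLS regressions. The original study estimates a full SEM;
# column names (sq, sp, cs, si, sl) and the dataset are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("discount_store_survey.csv")  # assumed survey file with scale means

# Intrinsic path: service quality -> customer satisfaction (H1)
h1 = smf.ols("cs ~ sq", data=df).fit()
# Extrinsic path: store personality -> store identification (H2)
h2 = smf.ols("si ~ sp", data=df).fit()
# Both mediators predicting store loyalty (H3, H4)
h3_h4 = smf.ols("sl ~ cs + si", data=df).fit()

for name, res in [("H1", h1), ("H2", h2), ("H3/H4", h3_h4)]:
    print(name, res.params.round(3).to_dict())
```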


A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.79-92
    • /
    • 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and helping users navigate the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center, i.e., other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents simple relationships among the integrated data, such as project-output, document-author, and document-topic relationships. The knowledge map then enables us to infer further relationships such as co-author and co-topic relationships. To extract the relationships among the integrated data, a Relational Data-to-Triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage, and process an organization's data as knowledge; the other analyzes and represents knowledge extracted from science and technology documents. This research focuses on the latter. A knowledge map service is introduced for integrating national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map; it allows knowledge to be represented and processed as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected by the national R&D data through author and performer relationships. A knowledge map displaying the researchers' network is created, where the network is built from the co-authoring relationships of national R&D documents and the co-participation relationships in national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. S&T information such as research papers, research reports, patents, and GTB is updated daily from NDSL, and R&D project information, including participants and outputs, is updated from NTIS; both are integrated into the integrated database.
The knowledge base is constructed by transforming the relational data into triples that reference the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them; the topic modeling approach allows these relationships and keywords to be extracted based on semantics rather than simple keyword matching. Lastly, we present an experiment on constructing the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of the knowledge base.
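
A minimal sketch of the two building blocks described above: a relational-data-to-triples step and a topic-modeling step for document-topic links. The namespace, sample rows, and model parameters are assumptions, not the paper's actual NDSL/NTIS pipeline or its R&D ontology.

```python
# Hypothetical sketch of (1) turning relational project/output rows into RDF triples
# and (2) extracting document-topic relationships with LDA topic modeling.
# The namespace, sample rows, and parameters are illustrative assumptions.
from rdflib import Graph, Namespace, Literal
from gensim import corpora, models

EX = Namespace("http://example.org/rnd/")
g = Graph()

# (1) Relational Data-to-Triples: project -> output and output -> author links
rows = [
    {"project": "P001", "output": "PAPER_0", "author": "Kim"},
    {"project": "P001", "output": "PAPER_1", "author": "Lee"},
]
for r in rows:
    g.add((EX[r["project"]], EX.hasOutput, EX[r["output"]]))
    g.add((EX[r["output"]], EX.hasAuthor, Literal(r["author"])))

# (2) Document-topic relationships via LDA over tokenized abstracts
docs = [["ontology", "knowledge", "map"], ["topic", "modeling", "lda", "knowledge"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

for doc_id, bow in enumerate(corpus):
    topic_id, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
    g.add((EX[f"PAPER_{doc_id}"], EX.hasTopic, EX[f"TOPIC_{topic_id}"]))

print(g.serialize(format="turtle"))
```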

Survival Value of Myocutaneous Flaps in the Management of Epidermoid Carcinoma of the Oral Cavity (구강내 상피암의 치료에서 근피부판이 생존율에 미치는 영향)

  • Seel David John;Park Chul-Young;Yoo Chung-Joon;Lee Samuel;Park Yoon-Kyu
    • Korean Journal of Head & Neck Oncology
    • /
    • v.6 no.2
    • /
    • pp.79-84
    • /
    • 1990
  • This paper reviews our experience with radical resection for cancer of the oral cavity, with particular emphasis on the value of myocutaneous (i.e., musculocutaneous) flaps used in surgical reconstruction for patient survival. During the past 15 years, 98 patients underwent resection of cancer arising in the oral cavity and oropharynx. Of these, 14 had composite resections in which the mandible was not sectioned, and 4 underwent en bloc resections without neck dissection for post-radiation recurrence. When these were excluded, 84 patients who underwent COMMANDO procedures with or without myocutaneous flaps were suitable for analysis of recurrence and survival according to the surgical techniques employed. 1) By surgical technique, there were 24 standard COMMANDO procedures in which no regional or myocutaneous flap was used; 12 patients reconstructed with a forehead flap; 19 patients in whom a posterior cervical 'nape' flap was employed; 27 patients who underwent myocutaneous or osteo-myocutaneous flap repair; and two patients who had double flap repair. 2) The uncorrected two-year disease-free survival was 41% for standard COMMANDOs, 17% for forehead flap COMMANDOs, 35% for nape flap COMMANDOs, and 35% for myocutaneous flap COMMANDO procedures. 3) Two-year disease-free survival by stage was 100% in Stage I, 45% in Stage II, 41% in Stage III, and 18% in Stage IV. 4) When myocutaneous flap cases were compared with Group I, matched historical controls including both standard COMMANDOs and those with regional flap repairs (that is, forehead and nape flap COMMANDOs), there was no difference, both groups showing a 40% two-year disease-free survival. 5) When musculocutaneous flap cases were compared with Group II, matched historical controls limited to patients with regional flap repairs (forehead and nape flap cases only), there was no difference, both groups showing a 27% two-year disease-free survival. 6) When musculocutaneous flap cases were compared with Group III, composed of patients who underwent classic COMMANDO procedures without any flap repair, there was a striking difference: the patients undergoing MC flap repair showed 50% two-year disease-free survival, whereas the classic COMMANDO cases showed 25% disease-free survival. 7) Locoregional recurrence was also evaluated in the four categories: 25% for standard COMMANDO cases, 26% for nape flap cases, 33% for forehead flap cases, and the lowest rate, 22%, for musculocutaneous flap cases. These results are particularly significant given that the proportion of advanced cases (Stage III and IV) in each category was 67% of standard cases, 79% of nape flap patients, 100% of forehead flap cases, and 96% of musculocutaneous flap cases.


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN is strong at interpreting images; thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The image in which each graph is drawn is $40{\times}40$ pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. The CNN classifiers are then trained on the images of the training dataset in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layer. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution and hidden layers were set to ReLU (Rectified Linear Unit), and the one for the output layer was set to the Softmax function. To validate our model, CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
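
A minimal Keras sketch of a network matching the architecture described above: two 5×5 convolution blocks with 6 and 9 filters, 2×2 max pooling, hidden layers of 900 and 32 ReLU units, and a 2-node softmax output over 40×40 RGB graph images. Settings not stated in the abstract, such as the optimizer and loss, are assumptions.

```python
# Hypothetical sketch of a CNN-FG-style classifier following the abstract.
# Input: 40x40 RGB images of fluctuation graphs; output: up/down probability.
# Optimizer, loss, and training settings are assumptions not given in the abstract.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(6, (5, 5), activation="relu",
                  input_shape=(40, 40, 3)),       # 5x5x6 convolution filters
    layers.MaxPooling2D((2, 2)),                  # 2x2 max pooling
    layers.Conv2D(9, (5, 5), activation="relu"),  # 5x5x9 convolution filters
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),         # first hidden layer: 900 nodes
    layers.Dense(32, activation="relu"),          # second hidden layer: 32 nodes
    layers.Dense(2, activation="softmax"),        # upward vs. downward trend
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```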

Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.;Hwang, H.S.;Kang, B.H.
    • Korean Journal of Weed Science
    • /
    • v.13 no.4
    • /
    • pp.210-233
    • /
    • 1993
  • Since 1970, herbicides have become as indispensable as nitrogen fertilizer in Korean agriculture. It is estimated that in 1991 more than 40 herbicides were registered for the rice crop and treated an area 1.41 times the rice acreage; more than 30 herbicides were registered for field crops and treated 89% of the crop area; and the treatment acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. During the last 25 years herbicides have benefited Korean farmers substantially in labor, cost, and time of farming. Any herbicide that causes crop injury under ordinary use is not allowed to be registered in most countries. Herbicides, however, can cause crop injury to some degree when they are misused, abused, or used under adverse environments. Herbicide use exceeding 100% of crop acreage implies an increased probability that herbicides are used incorrectly or under adverse situations. This is evidenced by the authors' nationwide surveys in 1992 and 1993, in which about 25% of farmers had experienced herbicide-caused crop injury more than once during the last 10 years; one-half of the injury incidences involved crop yield losses greater than 10%. Crop injury caused by herbicides had not occurred to a serious extent in the 1960s, when fewer than 5 herbicides were used by farmers on less than 12% of the total acreage. Farmers ascribed about 53% of the herbicidal injury incidences at their fields to their own misuse, such as overdose, careless or improper application, off-time application, or wrong choice of herbicide, while 47% of the incidences were mainly due to adverse natural conditions. Such misuse can be reduced to a minimum through enhanced education and extension services for correct use and, although undesirable, farmers' increased experience of phytotoxicity. The most difficult primary problem is the lack of countermeasures that allow farmers to cope with various adverse environmental conditions. At present almost all herbicides carry "Do not use!" instructions on the label to avoid crop injury under adverse environments. These "Do not use!" situations include sandy, highly percolating, or infertile soils; paddies fed by cool gushing water; poorly draining paddies; terraced paddies; too wet or dry soils; days of abnormally cool or high air temperature; etc. Meanwhile, the cultivated lands are in poor condition: the average organic matter content ranges from 2.5 to 2.8% in paddy soil and 2.0 to 2.6% in upland soil; the cation exchange capacity ranges from 8 to 12 m.e.; approximately 43% of paddy and 56% of upland fields have sandy to sandy-gravel soil; and only 42% of paddy and 16% of upland fields are on flat land. This situation means that about 40 to 50% of soil-applied herbicides are used on fields where the label instructs "Do not use!". Yet for 25 years no positive effort has been made by government or companies to develop countermeasures; it is a truly complicated social problem. In the 1960s and 1970s, a subsidy program to incorporate hillside red clayish soil into sandy paddies, as well as a campaign for increased application of compost to fields, was in operation, yet the majority of the sandy soils remain sandy and the program and campaign have been stopped. With regard to this sandy soil problem, the authors have developed a method of "split application of a herbicide onto sandy soil fields". A model case study has been carried out with success and is introduced with its key procedure in this paper. Climate is variable by nature.
Among the climatic components, a sudden fall or rise in temperature is hardly avoidable for a crop plant. Korean spring air temperature fluctuates widely; for example, the daily mean air temperature of Inchon city varied from 6.31 to $16.81^{\circ}C$ on April 20, the early seeding time of crops, within a ${\pm}$2 SD range of 30-year records. Seeding early in the season means an increased liability to phytotoxicity, and this will be more evident in direct water-seeding of rice. About 20% of farmers depend on cold groundwater pumped for rice irrigation; if the well is deeper than 70 m, the fresh water may be as cold as about $10^{\circ}C$ and should be warmed to about $20^{\circ}C$ before irrigation, which farmers do not practice well. In addition to the aforementioned adverse conditions there are many other aspects to be amended. Among them, the worst for liquid spray type herbicides is the almost total lack of proper knowledge of nozzle types and of concern for even spraying among administrators, rural extension officers, companies, and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available in the market; most people perceive all pesticide sprayers as the same and are concerned with the speed and ease of spraying rather than with correct spraying. There are many points to be improved to minimize herbicidal phytotoxicity in Korea and many ways to achieve the goal. First of all, it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be expanded to standard, double, and triple doses, so that the response slope can be exploited when deciding on approval and on recommending different doses for different situations on the label; 2) the government recognize the facts and nature of the present problem, correct present misperceptions, and develop an appropriate national program for improving soil conditions, spray equipment, and extension manpower and services; 3) researchers enhance research on countermeasures; and 4) herbicide makers and dealers correct their misperceptions and sales policies, develop a database on the detailed use conditions of individual consumers, and serve consumers with direct counsel based on that database.


Geological Structures of the Hadong Northern Anorthosite Complex and its surrounding Area in the Jirisan Province, Yeongnam Massif, Korea (영남육괴 지리산지구에서 하동 북부 회장암복합체와 그 주변지역의 지질구조)

  • Lee, Deok-Seon;Kang, Ji-Hoon
    • The Journal of the Petrological Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.287-307
    • /
    • 2012
  • The study area, which is located in the southeastern part of the Jirisan province of the Yeongnam massif, Korea, consists mainly of the Precambrian Hadong northern anorthosite complex (HNAC) and the Jirisan metamorphic rock complex (JMRC) and the Mesozoic granitoids which intrude them. Its tectonic frame is built into NS trend, unlike the general NE-trending tectonic frame of Korean Peninsula. This paper researched the structural characteristics at each deformation phase to clarify the geological structures associated with the NS-trending tectonic frame which was built in the HNAC and JMRC. The result indicates that the geological structures of this area were formed at least through three phases of deformation. (1) The $D_1$ deformation formed the $F_1$ sheath or "A"-type folds in the HNAC and JMRC, and the $S_{0-1}$ composite foliation and the $S_1$ foliation and the $D_1$ ductile shear zone which are (sub)parallel to the axial plane of $F_1$ fold, and the $L_1$ stretching lineation which is parallel to the $F_1$ fold axis owing to the large-scale top-to-the SE shearing on the $S_0$ foliation. (2) The $D_2$ deformation (re)folded the $D_1$ structural elements under the EW-trending tectonic compression environment, and formed the NS-trending $F_2$ open, tight, isoclinal, intrafolial folds with the $S_{0-1-2}$ composite foliation and the $S_2$ foliation and the $D_2$ ductile shear zone with S-C-C' structure and the $L_2$ stretching lineation which is (sub)parallel to the axial plane of $F_2$ fold. The extensive $D_2$ ductile shear zone (Hadong shear zone) of NS trend was persistently developed along the eastern boundary of HNAC and JMRC which would be to the limb of $F_2$ fold on a geological map scale. The Hadong shear zone is no less than 1.4 km width, and was formed in the mylonitization process which produced the mylonitic structure and the stretching lineation with the reduction of grain size during the $F_2$ passive folding. (3) The $D_3$ deformation formed the EW-trending $F_3$ kink or open fold under the NS-trending tectonic compression environment and partially rearranged the NS-trending pre-$D_3$ structural elements into (E)NE or (W)NW direction. The regional trend of $D_1$ tectonic frame before the $D_2$ deformation would be NE-SW unlike the present, and the NS-trending tectonic frame in the HNAC and JMRC like the present was formed by the rearrangement of the $D_1$ tectonic frame owing to the $F_2$ active and passive folding. Based on the main intrusion age of (N)NE-trending basic dyke in the study area, these three deformation events are interpreted to have occurred before the Late Paleozoic.

Antecedents of Manufacturer's Private Label Program Engagement : A Focus on Strategic Market Management Perspective (제조업체 Private Labels 도입의 선행요인 : 전략적 시장관리 관점을 중심으로)

  • Lim, Chae-Un;Yi, Ho-Taek
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.65-86
    • /
    • 2012
  • The $20^{th}$ century was the era of manufacturer brands, which built high brand equity for consumers. Consumers moved from the generic products of inconsistent quality produced by local factories in the $19^{th}$ century to branded products from global manufacturers, and manufacturer brands reached consumers through distributors and retailers. Retailers were relatively small compared to their largest suppliers. However, sometime in the 1970s things began to change slowly as retailers started to develop their own national chains and began international expansion, and consolidation of the retail industry from mom-and-pop stores to global players was well under way (Kumar and Steenkamp 2007, p.2). In South Korea, the bulking up of retailers that started in the middle of the 1990s has changed the balance of power between manufacturers and retailers. Retailer private labels, generally referred to as own labels, store brands, distributors' own private labels, home brands, or own label brands, have also been performing strongly in every single local market (Bushman 1993; De Wulf et al. 2005). Private labels now account for one out of every five items sold every day in U.S. supermarkets, drug chains, and mass merchandisers (Kumar and Steenkamp 2007), and the market share in Western Europe is even larger (Euromonitor 2007). In the UK, the grocery market share of private labels grew from 39% of sales in 2008 to 41% in 2010 (Marian 2010). Planet Retail (2007, p.1) recently concluded that "[PLs] are set for accelerated growth, with the majority of the world's leading grocers increasing their own label penetration." Private labels have gained wide attention both in the academic literature and in the popular business press, and there is growing academic research from the perspectives of manufacturers and retailers. Empirical research on private labels has mainly studied the factors explaining private label market shares across product categories and/or retail chains (Dahr and Hoch 1997; Hoch and Banerji 1993), the factors influencing consumers' private label proneness (Baltas and Doyle 1998; Burton et al. 1998; Richardson et al. 1996), and how brand manufacturers react to private labels (Dunne and Narasimhan 1999; Hoch 1996; Quelch and Harding 1996; Verhoef et al. 2000). Nevertheless, empirical research on the factors influencing private label production from a manufacturer-retailer perspective is anecdotal rather than theory-based. The objective of this paper is to bridge the gap between these two types of research and explore the factors that influence a manufacturer's private label production based on two competing theories: the S-C-P (Structure - Conduct - Performance) paradigm and resource-based theory. To do so, the authors conducted in-depth interviews with marketing managers, reviewed the retail press and research, and present a conceptual framework that integrates the major determinants of private label production. From a manufacturer's perspective, supplying private labels often starts on a strategic basis. When a manufacturer engages in private labels, it does not have to spend on advertising or retailer promotions or maintain a dedicated sales force. Moreover, if a manufacturer has weak marketing capabilities, it can make use of the retailer's marketing capability to produce private labels, lessening its marketing cost and increasing its profit margin.
Figure 1 presents the theoretical framework based on a strategic market management perspective, an integrated concept of both the S-C-P paradigm and resource-based theory. The model includes one mediating variable, marketing capabilities, and one moderating variable, competitive intensity. The manufacturer's national brand reputation, the firm's marketing investment, and its product portfolio are hypothesized to positively affect the manufacturer's marketing capabilities; marketing capabilities, in turn, are hypothesized to negatively affect private label production. Moderating effects of competitive intensity are hypothesized on the relationship between marketing capabilities and private label production. To verify the proposed research model and hypotheses, data were collected from 192 manufacturers (212 responses) producing private labels in South Korea. Cronbach's alpha tests, exploratory/confirmatory factor analysis, and correlation analysis were employed to validate the measures. The following results were drawn using structural equation modeling, and all hypotheses are supported. Findings indicate that a manufacturer's private label production is strongly related to its marketing capabilities. Marketing capabilities, in turn, are directly connected with the three strategic factors (marketing investment, the manufacturer's national brand reputation, and product portfolio), and the relationship between marketing capabilities and private label production is moderated by competitive intensity. In conclusion, this research may be the first study to investigate the reasons manufacturers engage in private labels based on two competing theoretical views, the S-C-P paradigm and resource-based theory. The private label phenomenon has received growing attention from marketing scholars. In many industries, private labels represent formidable competition for manufacturer brands, and manufacturers face a dilemma in selling to, as well as competing with, their retailers. The current study suggests key factors for manufacturers to consider when engaging in private label production.
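
A minimal sketch of how the moderated relationship described above (marketing capabilities → private label production, moderated by competitive intensity) could be checked with an interaction-term regression. The paper itself uses SEM; the CSV file and column names here are assumptions.

```python
# Hypothetical sketch: testing the moderating effect of competitive intensity
# on the marketing-capabilities -> private-label-production link with an
# interaction term. The study uses SEM; the dataset and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("manufacturer_survey.csv")  # assumed: 192 manufacturers

# Mean-center the predictor and moderator before forming the interaction term
df["mc_c"] = df["marketing_capabilities"] - df["marketing_capabilities"].mean()
df["ci_c"] = df["competitive_intensity"] - df["competitive_intensity"].mean()

model = smf.ols("private_label_production ~ mc_c * ci_c", data=df).fit()
print(model.summary())   # a significant mc_c:ci_c coefficient indicates moderation
```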


A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS based on the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology in which expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities that are more closely related to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed with the ranking factors combined, and our approach can work even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
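
A minimal sketch of the general idea of graph-based mutual-reinforcement ranking with a time-decay weight, in which importance scores propagate across links between users, resources, and tags until they converge. This is a toy illustration, not the paper's actual algorithm; the adjacency matrix, decay rate, and normalization are assumptions.

```python
# Hypothetical sketch of graph-based mutual-reinforcement ranking with a
# simple time-decay weight. This is a toy illustration, not the paper's
# actual algorithm; the toy graph, decay rate, and entities are assumed.
import numpy as np

# Rows/columns: [user1, user2, resource1, resource2, tag1]
# A[i, j] > 0 means entity j contributes importance to entity i.
A = np.array([
    [0, 0, 1, 1, 0],   # user1 tagged resource1 and resource2
    [0, 0, 0, 1, 1],   # user2 tagged resource2 and uses tag1
    [1, 0, 0, 0, 1],   # resource1 tagged by user1 with tag1
    [1, 1, 0, 0, 0],   # resource2 tagged by user1 and user2
    [0, 1, 1, 0, 0],   # tag1 used by user2 on resource1
], dtype=float)

ages_days = np.array([10, 2, 30, 1, 5])      # how old each entity's activity is
time_weight = np.exp(-0.05 * ages_days)      # newer contributors weigh more
W = A * time_weight                          # scale each contributing column

scores = np.ones(len(W)) / len(W)
for _ in range(100):                         # power iteration until convergence
    new = W @ scores
    new /= np.linalg.norm(new, 1)            # L1 normalization keeps scores bounded
    if np.allclose(new, scores, atol=1e-9):
        break
    scores = new

print(dict(zip(["user1", "user2", "res1", "res2", "tag1"], scores.round(3))))
```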

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies for utilizing item importance, itemset mining approaches that discover itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. The mining algorithms compute transactional weights by utilizing the weight of each item in large databases, and they discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the transaction's weight is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the most famous algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduces the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently conduct the processes for mining weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scanning operation after the construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, the traditional algorithms conduct a number of database scanning operations to mine weighted itemsets, whereas the algorithms based on the WIT-tree avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs the itemset combination process using the information of the transactions that contain both itemsets. WIT-FWIs-MODIFY has a unique feature that reduces the operations needed to calculate the frequency of a new itemset, and WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (i.e., dense and sparse) in terms of runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database is changed. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, and on the sparse dataset WIT-FWIs-DIFF shows better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because it requires, on average, a much larger number of computations than the others.
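
A minimal sketch of two core ideas described above: a transaction weight computed from the weights of its items, and length-(N+1) candidates built by intersecting the transaction-ID sets of two length-N itemsets. The toy database, item weights, and threshold are assumptions, and this is not the WIT-tree implementation itself.

```python
# Hypothetical sketch of transactional-weight-based frequent itemset mining.
# Transaction weight = average weight of its items; an itemset's weighted
# support = sum of weights of transactions containing it. Toy data is assumed;
# this illustrates the tidset-intersection idea, not the actual WIT-tree code.
from itertools import combinations

item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = {
    0: {"a", "b", "c"},
    1: {"a", "b"},
    2: {"b", "c", "d"},
    3: {"a", "d"},
}
# Weight of each transaction: average weight of the items it contains
t_weight = {tid: sum(item_weight[i] for i in items) / len(items)
            for tid, items in transactions.items()}

def weighted_support(tids):
    return sum(t_weight[t] for t in tids)

min_wsup = 1.0
# Level 1: transaction-ID sets (tidsets) of single items
tidsets = {frozenset([i]): {t for t, items in transactions.items() if i in items}
           for i in item_weight}
frequent = {k: v for k, v in tidsets.items() if weighted_support(v) >= min_wsup}

# Level N+1 candidates from pairs of level-N itemsets via tidset intersection
while frequent:
    for itemset, tids in sorted(frequent.items(), key=lambda kv: sorted(kv[0])):
        print(sorted(itemset), round(weighted_support(tids), 2))
    nxt = {}
    for (s1, t1), (s2, t2) in combinations(frequent.items(), 2):
        union = s1 | s2
        if len(union) == len(s1) + 1:          # join only itemsets differing by one item
            tids = t1 & t2                     # transactions containing the whole union
            if weighted_support(tids) >= min_wsup:
                nxt[union] = tids
    frequent = nxt
```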