• Title/Summary/Keyword: product cost


Field Application of Soil and Groundwater Investigation and Remediation

  • Wallner, Heinz
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 2000.11a
    • /
    • pp.44-63
    • /
    • 2000
  • Situated close to Heathrow Airport and adjacent to the M4 and M25 motorways, the site at Axis Park is considered a prime location for business in the UK. In consequence, two of the UK's major property development companies, MEPC and Redrow Homes, sought the expertise of Intergeo to remediate the contaminated former industrial site prior to its development. Industrial use of the twenty-six-hectare site started in 1936, when Hawker Aircraft commenced aircraft manufacture. In 1963 the Firestone Tyre and Rubber Company purchased part of the site. Ford commenced vehicle production at the site in the mid-1970s, and production was continued by Iveco Ford from 1986 to the plant's decommissioning in 1997. Geologically, the site is underlain by sand and gravel, deposited in prehistory by the River Thames, with London Clay at around 6 m depth. The groundwater level fluctuates seasonally at around 2.5 m depth, moving slowly southwest towards local streams and watercourses. A phased investigation of the site was undertaken, culminating in the extensive site investigation carried out by Intergeo in 1998. In total, 50 boreholes, 90 probeholes and 60 trial pits were used to investigate the site, and around 4,000 solid and 1,300 liquid samples were tested in the laboratory for chemical substances. The investigations identified total petroleum hydrocarbons in the soil at up to 25,000 mg/kg; diesel oil, with some lubricating oil, was the main component. Volatile organic compounds were identified in the groundwater in excess of 10 mg/l; specific substances included trichloromethane and tetrachloroethene. Both the oil and the volatile compounds were widely spread across the site. The specific substances identified could be traced back to industrial processes used at one or other date in the site's history. Slightly elevated levels of toxic metals and polycyclic aromatic hydrocarbons were also identified locally.
Prior to remediation of the site, and throughout its progress, extensive liaison with the regulatory authorities and the client's professional representatives was required. In addition to meetings, numerous technical documents detailing methods and health and safety issues were required in order to comply with UK environmental and safety legislation. After initially considering a range of options, the following three main remediation techniques were selected: ex-situ bioremediation of hydrocarbon-contaminated soils; skimming of free-floating hydrocarbon product from the water surface at wells and excavations; and air stripping of volatile organic compounds from groundwater recovered from wells. The achievements were as follows: 1) 350,000 m3 of soil was excavated and 112,000 m3 of sand and gravel was processed to remove gravel- and cobble-sized particles; 2) 53,000 m3 of hydrocarbon-contaminated soil was bioremediated in windrows; 3) 7,000 m3 of groundwater was processed by skimming to remove free-floating product; 4) 196,000 m3 of groundwater was processed by air stripping to remove volatile organic compounds. Only 1,000 m3 of soil left the site for disposal in licensed waste facilities; given the costs of disposal in the UK, the selected methods represented a considerable cost saving to the clients. All other soil was engineered back into the ground to a precise geotechnical specification. The following objective levels were achieved across the site: 1) by a Risk-Based Corrective Action (RBCA) methodology it was demonstrated that soil with less than 1,000 mg/kg total petroleum hydrocarbons did not pose a hazard to health or water resources and could therefore remain in situ; 2) soils destined for the residential areas of the site were remediated to 250 mg/kg total petroleum hydrocarbons, while in the industrial areas 500 mg/kg was proven acceptable.
3) Hydrocarbons in groundwater were remediated to below the Dutch Intervention Level of 0.6 mg/l; 4) volatile organic compounds/BTEX-group substances were reduced to below the Dutch Intervention Levels; 5) polycyclic aromatic hydrocarbons and metals were below Inter-departmental Committee for the Redevelopment of Contaminated Land guideline levels for the intended end use. In order to verify the quality of the work, 1,500 chemical test results were submitted for the purpose of validation. Quality assurance checks were undertaken by independent consultants and at an independent laboratory selected by Intergeo. Long-term monitoring of water quality was undertaken for a period of one year after the remediation work had been completed. Both the regulatory authorities and the clients' representatives endorsed the quality of the remediation now completed at the site. Subsequent to completion of the remediation work, Redrow Homes constructed a prestige housing development; the properties at "Belvedere Place" retailed at premium prices. On the MEPC site the Post Office, amongst others, has located a major sorting office for the London area. Exceptionally high standards of remediation, control and documentation were a requirement for the work undertaken here.
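The tiered cleanup targets described above amount to a simple screening rule; the sketch below uses the threshold values quoted in the text, but the sample data and all function names are invented for illustration.

```python
# Screening of total petroleum hydrocarbon (TPH) results against the
# remediation targets quoted in the text (values in mg/kg).
TPH_TARGETS = {
    "residential": 250,   # residential areas of the site
    "industrial": 500,    # industrial areas
    "rbca_screen": 1000,  # RBCA: below this, soil may remain in situ
}

def needs_remediation(tph_mg_kg: float, end_use: str) -> bool:
    """Return True if a soil sample exceeds the target for its end use."""
    return tph_mg_kg > TPH_TARGETS[end_use]

# Hypothetical sample results for illustration.
samples = [(180, "residential"), (320, "residential"), (450, "industrial")]
flagged = [s for s in samples if needs_remediation(*s)]
```

With these invented samples, only the 320 mg/kg residential result exceeds its target.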


A Store Recommendation Procedure in Ubiquitous Market for User Privacy (U-마켓에서의 사용자 정보보호를 위한 매장 추천방법)

  • Kim, Jae-Kyeong;Chae, Kyung-Hee;Gu, Ja-Chul
    • Asia pacific journal of information systems
    • /
    • v.18 no.3
    • /
    • pp.123-145
    • /
    • 2008
  • Recently, as information and communication technology has developed, discussion of the ubiquitous environment has arisen from diverse perspectives. A ubiquitous environment is one in which data can be transferred through networks regardless of physical space, virtual space, time, or location. In order to realize such an environment, Pervasive Sensing technology, which enables the recognition of users' data without a border between physical and virtual space, is required. In addition, the latest and most diversified technologies, such as Context-Awareness technology, are necessary to construct the context around the user by sharing the data accessed through Pervasive Sensing, together with linkage technology that prevents information loss through wired and wireless networking and databases. Pervasive Sensing in particular is regarded as an essential technology that enables user-oriented services by recognizing users' needs even before users make an inquiry. The technologies mentioned above give the ubiquitous environment many characteristics, such as ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density denotes the accessible amount and quality of information, which is stored in bulk with assured quality through Pervasive Sensing. Using this, companies can provide personalized contents (or information) for a target customer. Above all, there is an increasing number of studies on recommender systems that provide what customers need even when the customers do not explicitly express their needs. Recommender systems are well known for their positive effect of enlarging selling opportunities and reducing customers' search costs, since they find and provide information according to customers' traits and preferences in advance in a commerce environment.
Recommender systems have proved their usability through several methodologies and experiments conducted in many different fields since the mid-1990s. Most research related to recommender systems so far has taken the products or information of internet or mobile contexts as its object, but there is not enough research concerned with recommending an adequate store to customers in a ubiquitous environment. It is possible to track customers' behaviors in a ubiquitous environment in the same way as in an online market space, even when customers are purchasing in an offline marketplace. Unlike the existing internet space, in a ubiquitous environment there is increasing interest in stores that provide information according to the traffic lines of customers. In other words, the same product can be purchased in several different stores, and the preferred store can differ between customers according to personal preferences such as the traffic line between stores, location, atmosphere, quality, and price. Krulwich (1997) developed Lifestyle Finder, which recommends a product and a store by using the demographic and purchasing information generated in internet commerce. Fano (1998) created Shopper's Eye, an information-providing system in which information about the store closest to the customer's present location is shown when the customer sends a to-buy list. Sadeh (2003) developed MyCampus, which recommends appropriate information and a store in accordance with the schedule saved in a customer's mobile device. Moreover, Keegan and O'Hare (2004) came up with EasiShop, which provides suitable store information, including price, after-sales service, and accessibility, after analyzing the to-buy list and the current location of customers.
However, Krulwich (1997) does not indicate the characteristics of physical space, being based on the online commerce context; Keegan and O'Hare (2004) only provide information about stores related to a product; and Fano (1998) does not fully consider the relationship between the preference toward stores and the store itself. The most recent research, by Sadeh (2003), experimented on a campus with recommender systems that reflect situation and preference information besides the characteristics of the physical space. Yet there is a potential problem, since these studies are based on the location and preference information of customers, which is connected to the invasion of privacy. The primary point of controversy is the invasion of privacy and personal information in a ubiquitous environment, according to research conducted by Al-Muhtadi (2002), Beresford and Stajano (2003), and Ren (2006). Additionally, individuals want to remain anonymous in order to protect their own personal information, as mentioned in Srivastava (2000). Therefore, in this paper, we suggest a methodology to recommend stores in a U-market on the basis of the ubiquitous environment without using personal information, in order to protect personal information and privacy. The main idea behind our suggested methodology is based on the Feature Matrices model (FM model; Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data, similarly to Collaborative Filtering. Unlike Collaborative Filtering, however, this methodology overcomes the problems of personal information and privacy, since it is not aware of exactly who the customer is. The methodology is compared with a single-trait model (vector model) such as visitor logs, while looking at the actual improvement of the recommendation when context information is used. It is not easy to find real U-market data, so we experimented with factual data from a real department store, with context information added.
The recommendation procedure for the U-market proposed in this paper is divided into four major phases. The first phase is collecting and preprocessing data for the analysis of customers' shopping patterns; the traits of shopping patterns are expressed as feature matrices of N dimensions. In the second phase, similar shopping patterns are grouped into clusters and the representative pattern of each cluster is derived; the distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds a representative pattern that is similar to the target customer, while the shopping information of the customer is traced and saved dynamically. Fourth, the next store is recommended based on the physical distance between the stores of the representative pattern and the present location of the target customer. In this research, we evaluated the accuracy of the recommendation method based on factual data derived from a department store. There are technological difficulties in real-time tracking, so we extracted purchasing-related information and added context information to each transaction. As a result, recommendation based on the FM model, which applies purchasing and context information, is more stable and accurate than that of the vector model. Additionally, we found that the recommendation results become more precise as more shopping information is accumulated. Realistically, because of the limitations of realizing a ubiquitous environment, we were not able to reflect all kinds of context, but more explicit analysis is expected to be attainable in the future, after a practical system is embodied.
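The four phases above can be sketched with simple visit-count vectors. The plain Euclidean distance and mean-based cluster representatives below are simplified stand-ins for the FM model's feature matrices and Projected Pure Euclidean Distance, and all store names and counts are invented; note that no personal identity is used, only anonymous patterns.

```python
import math

# Phase 1: each shopping pattern is a vector of visit counts over stores
# (a simplified stand-in for the paper's N-dimensional feature matrices).
STORES = ["deli", "shoes", "books", "sports"]

def distance(p, q):
    """Euclidean distance between two visit-count vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Phase 2: the representative pattern of a cluster is its element-wise mean.
def representative(cluster):
    n = len(cluster)
    return [sum(v[i] for v in cluster) / n for i in range(len(STORES))]

# Phase 3: match the target's (partial, anonymous) pattern to the
# nearest cluster representative.
def nearest(reps, target):
    return min(reps, key=lambda r: distance(r, target))

# Phase 4: recommend the store the matched pattern visits most,
# excluding stores the target has already visited.
def recommend(rep, target):
    candidates = [(count, STORES[i]) for i, count in enumerate(rep) if target[i] == 0]
    return max(candidates)[1]

clusters = [
    [[5, 0, 3, 0], [4, 1, 2, 0]],   # book-and-deli shoppers
    [[0, 4, 0, 5], [1, 5, 0, 4]],   # shoes-and-sports shoppers
]
reps = [representative(c) for c in clusters]
target = [2, 0, 1, 0]               # has visited the deli and books so far
store = recommend(nearest(reps, target), target)  # "shoes"
```

In the paper's setting the recommendation would also weight physical distance between stores; here only pattern similarity is shown.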

Antecedents of Manufacturer's Private Label Program Engagement : A Focus on Strategic Market Management Perspective (제조업체 Private Labels 도입의 선행요인 : 전략적 시장관리 관점을 중심으로)

  • Lim, Chae-Un;Yi, Ho-Taek
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.65-86
    • /
    • 2012
  • The 20th century was the era of manufacturer brands, which built high brand equity with consumers. Consumers moved from the generic products of inconsistent quality produced by local factories in the 19th century to branded products from global manufacturers, and manufacturer brands reached consumers through distributors and retailers. Retailers were relatively small compared to their largest suppliers. However, sometime in the 1970s, things began to change slowly as retailers started to develop their own national chains and began international expansion, and the consolidation of the retail industry from mom-and-pop stores to global players was well under way (Kumar and Steenkamp 2007, p.2). In South Korea, the bulking up of retailers that started in the middle of the 1990s has changed the balance of power between manufacturers and retailers. Retailer private labels, generally referred to as own labels, store brands, distributor-owned private labels, home brands, or own-label brands, have also been performing strongly in every single local market (Bushman 1993; De Wulf et al. 2005). Private labels now account for one out of every five items sold every day in U.S. supermarkets, drug chains, and mass merchandisers (Kumar and Steenkamp 2007), and the market share in Western Europe is even larger (Euromonitor 2007). In the UK, the grocery market share of private labels grew from 39% of sales in 2008 to 41% in 2010 (Marian 2010). Planet Retail (2007, p.1) recently concluded that "[PLs] are set for accelerated growth, with the majority of the world's leading grocers increasing their own label penetration." Private labels have gained wide attention both in the academic literature and in the popular business press, and there is growing academic research from the perspectives of manufacturers and retailers.
Empirical research on private labels has mainly studied the factors explaining private labels' market shares across product categories and/or retail chains (Dhar and Hoch 1997; Hoch and Banerji 1993), the factors influencing the private label proneness of consumers (Baltas and Doyle 1998; Burton et al. 1998; Richardson et al. 1996), and how brand manufacturers react to private labels (Dunne and Narasimhan 1999; Hoch 1996; Quelch and Harding 1996; Verhoef et al. 2000). Nevertheless, empirical research on the factors influencing production in a manufacturer-retailer relationship is anecdotal rather than theory-based. The objective of this paper is to bridge the gap between these two types of research and explore the factors that influence a manufacturer's private label production, based on two competing theories: the S-C-P (Structure-Conduct-Performance) paradigm and resource-based theory. In order to do so, the authors conducted in-depth interviews with marketing managers, reviewed the retail press and research, and present a conceptual framework that integrates the major determinants of private label production. From a manufacturer's perspective, supplying private labels often starts on a strategic basis: when a manufacturer engages in private labels, it does not have to spend on advertising or retailer promotions, or maintain a dedicated sales force. Moreover, if a manufacturer has weak marketing capabilities, it can make use of the retailer's marketing capability to produce private labels, lessen its marketing cost, and increase its profit margin. Figure 1 shows the theoretical framework based on a strategic market management perspective, an integrated concept of both the S-C-P paradigm and resource-based theory. The model includes one mediating variable, marketing capabilities, and one moderating variable, competitive intensity.
A manufacturer's national brand reputation, marketing investment, and product portfolio are hypothesized to positively affect the manufacturer's marketing capabilities, and marketing capabilities, in turn, are hypothesized to negatively affect private label production. Moderating effects of competitive intensity are hypothesized on the relationship between marketing capabilities and private label production. To verify the proposed research model and hypotheses, data were collected from 192 manufacturers (212 responses) that produce private labels in South Korea. Cronbach's alpha tests, exploratory/confirmatory factor analysis, and correlation analysis were employed to validate the measures. The results were derived using structural equation modeling, and all hypotheses are supported. The findings indicate that a manufacturer's private label production is strongly related to its marketing capabilities; marketing capabilities, in turn, are directly connected with the three strategic factors (marketing investment, national brand reputation, and product portfolio), and the relationship between marketing capabilities and private label production is moderated by competitive intensity. In conclusion, this research may be the first study to investigate the reasons manufacturers engage in private labels based on two competing theoretical views, the S-C-P paradigm and resource-based theory. The private label phenomenon has received growing attention from marketing scholars. In many industries private labels represent formidable competition to manufacturer brands, and manufacturers face a dilemma in selling to, as well as competing with, their retailers. The current study suggests the key factors manufacturers should consider when engaging in private label production.
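Cronbach's alpha, used above to assess scale reliability, can be computed directly from item scores. A minimal sketch with invented Likert-scale responses (the survey data below is not from the study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 5-point Likert responses: 3 items x 6 respondents.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
alpha = cronbach_alpha(items)  # ~0.89: high internal consistency
```

Values around 0.7 or above are conventionally taken to indicate acceptable reliability of a multi-item scale.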


A Study on Efficiently Designing Customer Rewards Programs (고객 보상프로그램의 효율적 구성에 관한 연구)

  • Kim, Sang-Cheol
    • Journal of Distribution Science
    • /
    • v.10 no.1
    • /
    • pp.5-10
    • /
    • 2012
  • Currently, the rewards programs offered by many companies to strengthen customer relationships have been working quite well, and many companies' rewards programs, designed to stabilize revenue, are recognized as effective. However, these rewards programs are not significantly differentiated between companies, and no accurate conclusions can currently be drawn about their effects. Because of this, a company with a customer rewards program may not comprehend the true level of active participation. In this environment, some companies' rewards programs inadvertently hinder business profitability as a side effect of attempting to increase customer loyalty. In fact, airline and oil companies have passed the financial cost of their programs on to the customer and, as a result, have been criticized publicly; corporations with bad rewards programs tend to acquire a bad image. In this study of stores' rewards programs, we centered our focus on the design of the program. The main problem in this study is to recognize the financial value of the rewards program and whether it can create a competitive edge for companies despite its cost. Customers receiving financial rewards may be just as satisfied with a particular company or store as those who are not, in which case the program does not form a distinctive competitive advantage. We wanted to determine how much effect a valuable rewards program has on customers' decision making when they are choosing between competing companies to secure their product needs. To evaluate this, we set the first hypothesis as: "based on the level of involvement of the customers, there is a difference between customers' preferences for rewards programs."
The results of Experiment 1 showed that significant differences appeared between high-involvement and low-involvement groups in a financial compensation program, and Hypothesis 1 was partially supported. As for the second hypothesis, that "customers will have different preferences between a financial rewards program (SE) and a joint rewards program (JE)," the analysis showed that the preference for JE was significantly higher than that for other programs. In addition, through Experiment 2, we found meaningful results revealing that consumers show a significant difference in their preferences between SE and JE. The purpose of these experiments was to enable the design of a rewards program by learning how to enhance service-information distribution and strengthen customer relationships. The results should be of great value for future service-related endeavors and academic research. The research is significant because the results can have a positive effect on rewards program design; however, it does have the following limitations. First, this study was performed using an experiment, and all experiments have limitations. Second, although there was an individual evaluation and a joint evaluation, setting proper evaluation criteria was difficult. In this study, 1,000 Korean won (KRW) had a value of 2 points in the individual evaluation and a value of 1 point in the joint evaluation; there may have been better ways to differentiate the evaluations to obtain proper results. In addition, since there was no funding, the experiments were performed orally; however, this was complementary to the study. Third, the subjects who participated in this experiment were students. Conducting this study through experimentation was unavoidable for us, and future research should be conducted using an actual program with the target customers.


Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: Programmable rule set memory (RAM).
On-chip fuzzification operation by a table-lookup method. On-chip defuzzification operation by a centroid method. Reconfigurable architecture for processing two rule formats. RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the applications programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single dedicated program, so specializing an embedded processor for fuzzy control in this way is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

Time            | MIPS R3000 (regular) | MIPS R3000 (with min/max) | ASIC
6000 inferences | 125 s                | 49 s                      | 0.0038 s
1 inference     | 20.8 ms              | 8.2 ms                    | 6.4 us
FLIPS           | 48                   | 122                       | 156,250
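The max-min inference mechanism the chips implement can be sketched in software. This is a minimal Mamdani-style step over discretized membership arrays (the chips used 64-element arrays; the 5-element universes and rules here are invented for illustration), showing why min and max dominate the workload.

```python
# Max-min (Mamdani) fuzzy inference over discretized membership functions.
# Each fuzzy set is an array of membership grades, as on the ASIC chips.

def infer(rules, inputs):
    """Evaluate 'IF A and B THEN E' rules by max-min composition.

    rules: list of ((mu_a, mu_b), mu_e) with membership arrays for the
    antecedents and consequent; inputs: crisp indices (x, y) into mu_a, mu_b.
    """
    x, y = inputs
    out = [0.0] * len(rules[0][1])
    for (mu_a, mu_b), mu_e in rules:
        w = min(mu_a[x], mu_b[y])            # firing strength: fuzzy AND = min
        for i, m in enumerate(mu_e):
            out[i] = max(out[i], min(w, m))  # clip consequent, aggregate by max
    return out

def defuzzify(mu):
    """Centroid defuzzification, as in the chip's on-chip defuzzifier."""
    num = sum(i * m for i, m in enumerate(mu))
    den = sum(mu)
    return num / den if den else 0.0

# Two toy rules over 5-point universes.
rules = [
    (([1.0, 0.5, 0.0, 0.0, 0.0], [1.0, 0.5, 0.0, 0.0, 0.0]),
     [1.0, 0.5, 0.0, 0.0, 0.0]),            # IF x low AND y low THEN out low
    (([0.0, 0.0, 0.0, 0.5, 1.0], [0.0, 0.0, 0.0, 0.5, 1.0]),
     [0.0, 0.0, 0.0, 0.5, 1.0]),            # IF x high AND y high THEN out high
]
out = infer(rules, (1, 1))  # crisp inputs fall at index 1 (lowish)
crisp = defuzzify(out)
```

The inner loop is nothing but min and max over arrays, which is why dedicated min/max instructions (or parallel datapaths in silicon) pay off so directly.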


A study of SCM strategic plan: Focusing on the case of LG electronics (공급사슬 관리 구축전략에 관한 연구: LG전자 사례 중심으로)

  • Lee, Gi-Wan;Lee, Sang-Youn
    • Journal of Distribution Science
    • /
    • v.9 no.3
    • /
    • pp.83-94
    • /
    • 2011
  • Most domestic companies, with the exclusion of major firms, are reluctant to implement a supply chain management (SCM) network in their operations, and most small- and medium-sized enterprises are not even aware of SCM. Due to its inherent total-systems efficiency, SCM coordinates domestic manufacturers, subcontractors, distributors, and physical distributors and cuts down the costs of inventory control and demand management. A lack of SCM therefore causes a decrease in competitiveness for domestic companies. The reason lies in what is fundamental to SCM: information sharing, process innovation throughout the supply chain, and the vast range of problems the SCM management tool is able to address. This study suggests contemplation and reform of the current SCM situation by analyzing SCM strategic plans, the discourses and logical discussions on the topic, and a successful case of adopting SCM; hence, the study aims at implementing SCM productively. First, it is necessary to review the theoretical background of SCM before discussing how to implement it successfully. Chapter 2 describes the concept and background of SCM, with a definition of SCM, types of SCM promotional activities, fields of SCM, the necessity of applying SCM, and the effects of SCM. The defects in current SCM practice are introduced in Chapter 3. Discussion items include the following: the bullwhip effect; the breakdown in supply chain and sales networks due to e-business; and the issue that, even though the key to successful SCM is cooperation between the production and distribution companies, the companies often put their own profits first during the SCM process, resulting in possible defects in demand estimation.
Furthermore, the problems of implementing SCM in a domestic distribution-production company concern information technology; for example, the new system introduced to the company may not be compatible with the pre-existing document architecture. Second, for effective management, distribution and production companies should cooperate and enhance their partnership at the corporate level; in reality, however, this seldom occurs. Third, in terms of the work process, introducing SCM can provoke friction within corporations during the integration of the distribution-production process. Fourth, to increase the achievement of the SCM strategy process, companies need to set up cross-functional teams; however, business partners often lack the cooperation and business-information-sharing tools necessary to effect the transition to SCM. Chapter 4 addresses an SCM strategic plan and a case study of LG Electronics, introducing the purpose of the strategic plan, strategic plans for types of business, the adoption of SCM in a distribution company, and the global supply chain process of LG Electronics. The conclusion of the study, in Chapter 5, addresses the fierce competition that companies currently face in the global market environment and their increased investment in SCM, undertaken to cope better with short product life cycles and high customer expectations. The SCM management system has evolved through the adoption of improved information, communication, and transportation technologies, and now demands the utilization of various strategic resources. The introduction of SCM benefits the management of a network of interconnected businesses by securing customer loyalty through the cost and time savings derived from consolidating many distribution systems; additionally, SCM helps enterprises form a wide range of marketing strategies.
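The bullwhip effect named among the discussion items can be illustrated numerically. The toy simulation below is invented for illustration (it is not from the LG Electronics case): each tier forecasts demand as a moving average of the orders it receives and adds a safety adjustment, so order variability is amplified as it moves upstream.

```python
# Toy bullwhip-effect demo: each supply chain tier forecasts demand with a
# moving average and orders current demand plus a safety-stock adjustment,
# amplifying order variability upstream.

def tier_orders(incoming, safety=0.5, window=3):
    """Orders placed upstream by a tier receiving `incoming` orders."""
    placed = []
    for t, demand in enumerate(incoming):
        recent = incoming[max(0, t - window + 1): t + 1]
        forecast = sum(recent) / len(recent)
        # order = current demand plus a safety adjustment on the forecast gap
        placed.append(max(0.0, demand + safety * (demand - forecast)))
    return placed

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

retail_demand = [10, 12, 9, 14, 8, 13, 10, 15, 7, 12]
wholesale = tier_orders(retail_demand)  # retailer -> wholesaler orders
factory = tier_orders(wholesale)        # wholesaler -> manufacturer orders

# Order variance grows at each upstream tier: the bullwhip effect.
```

Information sharing across tiers, a core SCM mechanism discussed above, damps exactly this amplification by letting upstream tiers forecast from end-customer demand rather than from distorted orders.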
Thus, we conclude that not only distributors but all types of businesses should adopt the systems approach to supply chain strategies. SCM deals with the basic stream of distribution and increases the value of a company by replacing physical distribution with information. By obtaining and sharing timely information, a company can create customer satisfaction at the end point of delivery to the consumer.


The Making of Artistic Fame:The Case of Korean Handicraft Artists (예술가 명성(fame) 형성 요인에 관한 연구: 국내 공예작가의 사례를 중심으로)

  • Choe, Youngshin;Hyun, Eunjung
    • Review of Culture and Economy
    • /
    • v.21 no.2
    • /
    • pp.141-173
    • /
    • 2018
  • In this article, we explore how artistic fame is formed by analyzing antecedents of fame (the extent to which the name of an actor or his/her work is positively known by his/her audiences) among Korean handicraft artists. Drawing on prior literature on reputation and fame, we clarify the differences between the concepts of reputation and fame, and further distinguish three types of reputation among individual artists depending on its source: expert reputation, market reputation, and peer reputation. We employ a mixed method in this study: we first conducted open-ended interviews with three kinds of constituents (i.e., critics, market intermediaries, and artists) and then developed and tested hypotheses derived from the insights we had obtained from the interviews. We further considered the impact of reputational work, defined as the level of effort devoted and activities performed by artists themselves toward promoting their own work, on artistic fame. We find large differences in the factors associated with artistic fame between non-elite and elite Korean handicraft artist groups, where elite status is captured by the artists' educational background (i.e., Seoul National University and Hongik University, which are considered elite schools in accordance with prior research). Specifically, the findings suggest that among non-elite artists, recognition by experts, or what we call expert reputation, acquired through national awards and invitations to prominent exhibitions, as well as artists' own high-cost reputational work, such as self-financed exhibition openings, were highly significant factors associated with artistic fame, which was measured as the number of media exposures related to an artist's work.
By contrast, among elite artists, peer reputation acquired through an artist's institutional affiliations, together with relatively low-cost reputational work by the artists themselves, such as self-listing in a highly publicized magazine, were significant factors associated with fame. Taken together, this paper contributes to research on cultural industries and markets by highlighting the importance of understanding artistic fame not simply as the outcome of an artist's talent but as a social product that arises at the intersection of actors (artists) and their audiences in the social evaluation process.

Permanent Preservation and Use of Historical Archives : Preservation Strategy and Digitization of Historical Collections (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies
    • /
    • no.1
    • /
    • pp.23-76
    • /
    • 2000
  • In this paper, I examine what has been researched and determined about preservation strategy and the selection of preservation media in the Western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worthy of being preserved permanently. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has shortened due to the acidification and brittleness of modern papers, and the emergence of information technology affects the traditional ways of preserving and using archival materials. User expectations have become so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than relying on traditional archival handicraft. Preservation strategy plays an important role in archival management as well as information management; for the cost-effective management of archives and archival institutions, a preservation strategy is a must. The preservation strategy encompasses all aspects of the archival preservation process and its practices, from selection of archives, appraisal, inventorying, arrangement, description, and conservation, to microfilming or digitization, archival buildings, and access services. These archival functions should be considered in relation to each other to ensure the proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined, with neither fulfilled at the expense of the other. Preservation strategy planning is essential for archives to determine policies that keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials; to this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality.
Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option for a permanent preservation medium. ISO and ANSI have developed standards for the quality of microfilms and microfilming technology, and preservation microfilming guidelines were also developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming: limited resources always put a restraint on preservation management, so appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use over the internet. As information technology develops, archivists have begun to use it to make preservation easier and more economical, and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disks, have not been proven reliable for permanent preservation: because of their chemical coating and their physical reliance on light, they are not stable and can be preserved for at best 100 years in an optimum storage environment, and most CD-Rs last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, the cost of refreshing or upgrading digital images is very high, and the process has to be repeated at least every five to ten years.
No standard for this obsolescence of hardware and software has yet come into being. In short, digital permanence is not a fact but an uncertain possibility. Archivists must weigh in their preservation planning both the risks and the promising possibilities of introducing new technology. In planning the digitization of historical materials, archivists should also plan for maintaining the digitized images and reformatting them for coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become impossible; that is a loss of information, and a final failure of both the 'preservation' and 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

Social division of labor in the traditional industry district - focused on the bamboo ware industry of Damyang and the pottery industry of Yeoju, South Korea (우리나라 재래공업 산지의 사회적 분업 - 담양죽제품과 여주 도자기 산지를 사례로 -)

  • Park, Yang-Choon;Lee, Chul-Woo;Park, Soon-Ho
    • Journal of the Korean Geographical Society
    • /
    • v.30 no.3
    • /
    • pp.269-295
    • /
    • 1995
  • This research is concerned with the social division of labor within traditional industry districts: the Damyang bamboo ware industry district and the Yeoju pottery industry district in South Korea. Damyang bamboo ware and Yeoju pottery are well-known Korean traditional industries. The social division of labor within an industry district is considered an important factor, as it helps a traditional industry survive today. This summary draws five significant points from the major findings. First, the Damyang bamboo ware industry and the Yeoju pottery industry experienced growth until 1945, stagnation in the 1960s, and business recovery in the 1980s. Most Korean traditional industries declined radically under Japanese colonization, whereas the Damyang bamboo ware and Yeoju pottery districts developed throughout all of these stages. The extended market to Japan led the local government to establish a training center and to provide financial and technical aid to craftsmen. During the 1960s and 1970s, the mass factory production of substitute goods reduced the demand for bamboo ware and pottery. During the 1980s, these industries slowly recovered as a result of increased income per capita: the high rate of economic growth in the 1960s and 1970s played an important role in the emergence of renewed demand for bamboo ware and pottery. Second, the production-and-marketing system in a traditional industry district diversified to adjust to the demand for its products. In the Damyang bamboo ware district, the level of social division of labor was low until the high-economic-development period: bamboo ware was made by farmers in a small domestic system and sold mainly in the periodic bamboo ware market in Damyang.
In the recession period of the 1960s and 1970s, the production-and-marketing system diversified: manufacturing-wholesale businesses and small-factory businesses became established, and wholesale businesses and export traders appeared in the district. In the recovery period of the 1980s, the production-and-marketing systems diversified further: small-factory businesses began to depend on subcontractors for part of the production process, and wholesale businesses in the district engaged in the production of bamboo ware. In the Yeoju pottery industry district, the social division of labor was limited until the early 1970s: pottery was made by craftsmen in small domestic businesses and sold by middlemen outside Yeoju. Since the late 1970s, the production-and-marketing system has diversified as a result of increased demand in Japan and South Korea. In the 1970s, Korean traditional craft pottery was in high demand in Japan, which encouraged people in Yeoju to become craftsmen and/or to work in pottery-related occupations. In South Korea, rapid economic growth led to a renewed inclination toward pottery over the stainless steel and plastic bowls and dishes that had been developed. Production facilities were modernized to provide pottery at a reasonable price, and small domestic businesses were transformed into small-factory businesses. The social division of labor intensified in the pottery production-and-marketing system: the processing of kaolin began to be separated from the production of pottery, and within the district, pottery wholesale and retail businesses began to be established in the 1980s. Third, traditional industry districts can be divided into 'complete' and 'incomplete' ones according to whether or not firms within the district perform all the functions of the social division of labor.
The Damyang bamboo ware industry district is a 'complete' one: firms within the district are in charge of the supply of raw material, production, and marketing. In the Damyang bamboo ware district, the social division of labor was intensified and the labor system reorganized to improve external-economy effects. Lastly, the social division of labor played an important role in the development of traditional industry districts. The subdivision of the production process and the diversification of business reduced production costs and overcame labor shortages by hiring low-waged workers such as family members, the elderly, and housewives. Entrepreneurs with small amounts of capital could easily enter the business, and the risks of business recession were dispersed. The accumulated know-how in production and marketing provided the flexibility to produce various goods and to extend the life cycle of a product.


Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags: active tags carry a power source that allows independent operation, while passive tags are small and low-cost, which makes them more suitable than active tags for the distribution industry. A reader processes the information received from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied to a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) should be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes fall into three categories: probabilistic, deterministic, and hybrid. In this paper, we introduce ALOHA-based protocols as probabilistic methods and tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots; tags randomly select a slot and transmit their IDs in it. Because they are probabilistic, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In tree-based protocols, the reader sends a query and tags respond with their IDs; when two or more tags respond to a query, a collision occurs, and the reader makes and sends a new query. Frequent collisions degrade identification performance, so to identify tags quickly it is necessary to reduce collisions efficiently. Each RFID tag has an ID in the form of a 96-bit EPC (Electronic Product Code).
Tags from the same company or manufacturer have similar IDs sharing a common prefix, so unnecessary collisions occur while identifying multiple tags with the Query Tree protocol; the resulting growth in query-responses and idle time significantly increases the identification time. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree and Query Tree protocols, only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose the Adaptive M-ary Query Tree protocol, which improves identification performance using m-bit recognition, collision information from tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
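The probabilistic behavior of ALOHA-based protocols described in this abstract can be illustrated with a short simulation. This is a hedged sketch, not the paper's implementation: the frame size, seed, and tag IDs are illustrative assumptions, and a real reader would also adapt the frame size to the estimated tag count.

```python
import random

def framed_slotted_aloha(tag_ids, frame_size=4, seed=0):
    """Simulate framed slotted ALOHA: in each frame, every unidentified
    tag picks a random slot; slots holding exactly one tag succeed,
    and collided tags retry in the next frame."""
    rng = random.Random(seed)
    remaining = set(tag_ids)
    identified, frames = [], 0
    while remaining:
        frames += 1
        slots = {}
        # Sorted iteration keeps the simulation deterministic for a given seed.
        for tag in sorted(remaining):
            slots.setdefault(rng.randrange(frame_size), []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:      # singleton slot: tag identified
                identified.append(occupants[0])
                remaining.discard(occupants[0])
        # slots with two or more occupants collided; those tags remain
    return identified, frames

ids, frames = framed_slotted_aloha(["A1", "B2", "C3", "D4"])
assert set(ids) == {"A1", "B2", "C3", "D4"}
```

Note that termination here relies on chance: as the abstract states, an ALOHA-based scheme cannot *guarantee* that all tags are identified within any fixed number of frames, which is exactly why the paper turns to tree-based protocols.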
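The deterministic query-response mechanism underlying the Query Tree family can likewise be sketched in a few lines. This models the plain binary Query Tree protocol that the paper improves upon, not the proposed Adaptive M-ary variant; the 4-bit IDs are an illustrative stand-in for 96-bit EPCs.

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Identify all tags with the binary Query Tree protocol.

    tag_ids: collection of equal-length bit strings (tag IDs).
    Returns (identified_ids, total_number_of_queries).
    """
    identified = []
    queries = 0
    queue = deque([""])              # reader starts with the empty prefix
    while queue:
        prefix = queue.popleft()
        queries += 1
        # Every tag whose ID starts with the queried prefix responds.
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:     # exactly one reply: tag identified
            identified.append(responders[0])
        elif len(responders) > 1:    # collision: extend the prefix by one bit
            queue.append(prefix + "0")
            queue.append(prefix + "1")
        # zero responders: idle query, nothing to do
    return identified, queries

tags = {"0010", "0011", "1101"}
ids, n_queries = query_tree_identify(tags)
assert set(ids) == tags
assert n_queries == 9
```

The run above shows the abstract's complaint concretely: the shared prefix "001" forces a chain of colliding queries that each resolve only one bit, which is the inefficiency that m-bit recognition and collision information are meant to remove.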