• Title/Summary/Keyword: Operation Cost

Search Results: 3,770, Processing Time: 0.04 seconds

A Study on Major Safety Problems and Improvement Measures of Personal Mobility (개인형 이동장치의 안전 주요 문제점 및 개선방안 연구)

  • Kang, Seung Shik;Kang, Seong Kyung
    • Journal of the Society of Disaster Information
    • /
    • v.18 no.1
    • /
    • pp.202-217
    • /
    • 2022
  • Purpose: The recent increase in the use of Personal Mobility (PM) devices has been accompanied by a rise in the annual number of accidents. Accordingly, the safety requirements for PM use are being strengthened, but the laws/systems, infrastructure, and management systems remain insufficient for fostering a safe environment. Therefore, this study comprehensively identifies the main problems and improvement measures through a review of previous PM-related studies, and then presents priorities among the improvement measures, according to their importance, through a Delphi survey. Method: The research method consists mainly of a literature study and an expert survey (Delphi survey). Prior research and improvement cases (local governments, government departments, companies, etc.) are reviewed to derive problems and improvements, and a problem/improvement classification table is created based on keywords. Based on this classification, an expert survey is conducted to derive a prioritized improvement plan. Result: The PM-related problems were 'non-compliance with traffic laws, lack of knowledge, inexperienced operation, and lack of safety awareness' among human factors, and 'device characteristics, road/drivable space, road facilities, and parking facilities' among physical factors. Administrative factors comprised 'management/supervision, product management, user management, and education/training', and legal factors were divided into 'absence/insufficiency of law, confusion/duplication, and reduced effectiveness'. The related improvement tasks include 'PM education/public relations, parking/return, road improvement, PM registration/management, insurance, safety standards, traffic standards, PM device safety, PM supplementary facilities, enforcement/management, dedicated organization, service providers, management system, and related legal/institutional improvement', and 42 detailed tasks are derived for these 14 core tasks.
The importance evaluation of the detailed tasks shows that the tasks with a high overall average across the evaluation criteria of cost, time, effect, urgency, and feasibility were 'strengthening crackdown/instruction activities, education publicity/campaigns, abandoned-PM management, and clarification of traffic rules'. Conclusion: The PM market is growing steadily on the basis of shared services, and a safe environment for PM use must be ensured alongside industrial revitalization. In this respect, this study identifies the major PM-related problems and improvement plans from a comprehensive point of view and prioritizes the necessary improvement measures; it can therefore serve as baseline data for future policy-making. For practical policy application, in-depth supplementary data will be required for each key improvement area.
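The Delphi prioritization step described above, ranking detailed tasks by their overall average across the five evaluation criteria, can be sketched as follows. The scores and task subset are hypothetical illustrations, not the study's actual survey data.

```python
# Hypothetical sketch of the Delphi prioritization: each detailed task is
# scored by experts on five criteria (cost, time, effect, urgency,
# feasibility), and tasks are ranked by the overall mean score.
tasks = {
    "strengthening crackdown/instruction activities": [4.2, 4.0, 4.5, 4.4, 4.1],
    "education publicity/campaign":                   [4.4, 3.9, 4.3, 4.2, 4.3],
    "road improvement":                               [2.8, 2.5, 4.0, 3.6, 2.9],
}

# Rank tasks by their overall average score, highest first
ranked = sorted(tasks.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {sum(scores) / len(scores):.2f}")
```

With these illustrative scores, enforcement and education tasks rank highest, mirroring the study's finding that such tasks led the overall averages.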

Characteristics of the construction process, the history of use and performed rituals of Gyeongungung Heungdeokjeon (경운궁 흥덕전의 조영 및 사용 연혁과 설행된 의례의 특징)

  • LIM, Cholong;JOO, Sanghun
    • Korean Journal of Heritage: History & Science
    • /
    • v.55 no.1
    • /
    • pp.281-304
    • /
    • 2022
  • Heungdeokjeon was the first pavilion built on the site of Sueocheong during the expansion of Gyeongungung. In this study, we tried to clarify the specific construction process of Heungdeokjeon, which was used for various purposes: as a site for copying royal portraits, as a temporary enshrinement site, and as a funeral hall where the royal coffin lay in state (Binjeon). In addition, we tried to establish its historical value from the characteristics revealed by the building's history of use and the rituals performed there. Heungdeokjeon began to be built in the second half of 1899 and is estimated to have been completed between mid-February and mid-March 1900. It was a ritual facility equipped with waiting rooms for the emperor and royal ladies as annexes. The relocation work was planned in April 1901 and began in earnest after June, and it was closely linked to the construction of the attached buildings of Seonwonjeon. Moreover, by comparing the records on the construction and relocation costs of Heungdeokjeon with those related to the reconstruction of Seonwonjeon, it was confirmed that the annex buildings of Heungdeokjeon were relocated and reused as annex buildings of Seonwonjeon. The characteristics identified while Heungdeokjeon served as a place for copying portraits are as follows. First, it was used as a place to copy portraits twice within a short period. Second, it was where the first unprecedented works related to the copying of portraits were carried out. Third, a pavilion built specifically for imperial rituals was used as a place to copy portraits. It was subsequently used as a funeral hall, and features different from those of the preceding period can be identified: it was a building dedicated to rituals for use as Binjeon, yet also a multipurpose building for copying portraits.
In other words, Heungdeokjeon, along with Gyeongbokgung Taewonjeon, is a building that shows the changes in the operation of Binjeon in the late Joseon Dynasty. Distinctive features are also confirmed in the portrait-related rituals performed at Heungdeokjeon. The first is that Jakheonlye was practiced frequently within a short period. The second is that the ancestral rites of Sokjeolje and Bunhyang on Sakmangil, which were mainly held at provincial Jinjeon, are identified here; this is a very rare case for a Jinjeon within the palace. The last is that Jeonbae, Jeonal, and Bongsim were performed multiple times. In conclusion, Heungdeokjeon can be said to be a highly symbolic building that reflects the intentions of Gojong, who valued imperial rituals, and the characteristics of the reconstruction process of Gyeongungung.

Development of tracer concentration analysis method using drone-based spatio-temporal hyperspectral image and RGB image (드론기반 시공간 초분광영상 및 RGB영상을 활용한 추적자 농도분석 기법 개발)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun;Han, Eunjin;Kwon, Siyoon;Kim, Youngdo
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.8
    • /
    • pp.623-634
    • /
    • 2022
  • Due to river maintenance projects such as the creation of waterfront areas and the Four Rivers Project, the flow characteristics of rivers are continuously changing, and the risk of water quality accidents due to the inflow of various pollutants is increasing. In the event of a water quality accident, it is necessary to minimize the effect on the downstream side by predicting the concentration and arrival time of pollutants in consideration of the river's flow characteristics. In order to track the behavior of these pollutants, the diffusion and dispersion coefficients must be calculated for each section of the river; among them, the dispersion coefficient is used to analyze the spreading range of soluble pollutants. Existing experimental studies for tracking pollutant behavior require substantial manpower and cost, and it is difficult to obtain spatially high-resolution data with the limited equipment that can be operated. Recently, research on tracking contaminants using RGB drones has been conducted, but RGB images collect only limited spectral information. In this study, to overcome the limitations of existing studies, a hyperspectral sensor was mounted on a drone-based remote sensing platform to collect data at higher temporal and spatial resolution than conventional contact measurement. Using the collected spatio-temporal hyperspectral images, the tracer concentration was calculated and the transverse dispersion coefficient was derived. By overcoming the limitations of the drone platform in future research and refining the dispersion coefficient calculation technique, it is expected that various pollutants leaking into the water system can be detected, along with changes in various water quality parameters and river factors.
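The derivation of a transverse dispersion coefficient from tracer concentration profiles can be sketched with the standard method of moments, D_y = U * (σ₂² − σ₁²) / (2 (x₂ − x₁)). This is a minimal illustration with synthetic Gaussian profiles, not the study's actual data or algorithm.

```python
import numpy as np

def transverse_variance(y, c):
    """Concentration-weighted variance of the transverse profile c(y)."""
    mean_y = np.sum(y * c) / np.sum(c)
    return np.sum((y - mean_y) ** 2 * c) / np.sum(c)

def dispersion_coefficient(y, c1, c2, x1, x2, u):
    """Moment-method transverse dispersion coefficient between sections x1 and x2."""
    var1 = transverse_variance(y, c1)
    var2 = transverse_variance(y, c2)
    return u * (var2 - var1) / (2.0 * (x2 - x1))

# Synthetic Gaussian tracer profiles widening downstream (hypothetical numbers)
y = np.linspace(-10, 10, 201)        # transverse coordinate [m]
c1 = np.exp(-y**2 / (2 * 1.0**2))    # variance = 1.0 m^2 at x = 0 m
c2 = np.exp(-y**2 / (2 * 2.0**2))    # variance = 4.0 m^2 at x = 100 m
Dy = dispersion_coefficient(y, c1, c2, x1=0.0, x2=100.0, u=0.5)  # u = 0.5 m/s
print(round(Dy, 4))  # -> 0.0075, i.e. 0.5 * (4 - 1) / 200
```

In practice the profiles c1 and c2 would come from the tracer concentrations estimated from the hyperspectral imagery at two cross-sections.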

A Study on the Determinant of Capital Structure of Chinese Shipbuilding Industry (중국 조선기업 자본구조 결정요인에 관한 연구)

  • Jin, Siwen;Lee, Ki-Hwan;Kim, Myoung-Hee
    • Journal of Korea Port Economic Association
    • /
    • v.38 no.2
    • /
    • pp.81-93
    • /
    • 2022
  • Since 2008, China's shipping industry has been in a slump, with shipbuilding orders falling sharply; the overcapacity built up during the high-growth years has become increasingly apparent, leaving many firms with sharply reduced orders at risk of bankruptcy and shutdown. To ensure the development of the shipbuilding industry and enhance its international competitiveness, it is necessary to analyze the present situation of the industry and the financial condition of shipbuilding enterprises. Analyzing the problems these enterprises face from the perspective of capital structure is particularly meaningful for shipbuilders, which are highly capital-intensive. This study analyzes the determinants of the capital structure of China's listed shipbuilding companies. Thirty listed Chinese shipbuilding-related companies for which financial statements were available for 13 consecutive years were selected as the sample and divided into shipbuilding, shipbuilding-related manufacturing, and shipbuilding-related transportation. The dependent variable is the debt level of the current year; the independent variables include the debt level of the previous year, fixed asset ratio, profitability ratio, depreciation cost ratio, and asset size. A panel regression model is used to analyze the determinants of capital structure. The results of the empirical analysis are as follows. First, a fixed-effect model for the entire sample showed that the debt ratio and asset size of the previous period had a positive effect on the debt ratio of the current period. Second, the impact of the profitability ratio on the debt level supports the pecking order theory rather than the static trade-off theory. Third, the previous period's depreciation ratio, a proxy for the non-debt tax shield, was shown to affect the debt ratio of the current period.
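A fixed-effect panel regression of the kind described can be sketched with the within transformation: demean each variable by firm, then run OLS on the demeaned data. The data below is synthetic and the regressor names are illustrative; this is not the study's dataset or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 30, 13                       # mirrors the 30-firm, 13-year panel
firm = np.repeat(np.arange(n_firms), n_years)   # firm index for each observation

# Hypothetical regressors: lagged debt ratio, fixed-asset ratio, profitability
X = rng.normal(size=(n_firms * n_years, 3))
alpha = rng.normal(size=n_firms)[firm]          # unobserved firm fixed effects
beta_true = np.array([0.6, 0.2, -0.3])
y = alpha + X @ beta_true + rng.normal(scale=0.1, size=len(firm))

def within_demean(v, groups):
    """Subtract each group's mean from its observations (within transformation)."""
    sums = np.zeros(groups.max() + 1)
    np.add.at(sums, groups, v)
    counts = np.bincount(groups)
    return v - (sums / counts)[groups]

# Demean dependent and independent variables by firm, then estimate by OLS
Xd = np.column_stack([within_demean(X[:, j], firm) for j in range(X.shape[1])])
yd = within_demean(y, firm)
beta_hat, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
print(np.round(beta_hat, 2))  # close to the true coefficients
```

The within transformation removes the firm fixed effects (alpha) exactly, which is why the lstsq estimates recover the coefficients despite the unobserved heterogeneity.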

Preliminary Inspection Prediction Model to select the on-Site Inspected Foreign Food Facility using Multiple Correspondence Analysis (차원축소를 활용한 해외제조업체 대상 사전점검 예측 모형에 관한 연구)

  • Hae Jin Park;Jae Suk Choi;Sang Goo Cho
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.121-142
    • /
    • 2023
  • As the number and weight of imported foods steadily increase, safety management of imported food to prevent food safety accidents is becoming more important. The Ministry of Food and Drug Safety conducts on-site inspections of foreign food facilities before customs clearance, as well as import inspections at the customs clearance stage. However, given limited time, cost, and resources, a data-based safety management plan for imported food is needed. In this study, we tried to increase the efficiency of on-site inspections by building a machine learning prediction model that pre-selects the facilities expected to fail inspection. Basic information on 303,272 foreign food facilities and processing businesses collected in the Integrated Food Safety Information Network, together with 1,689 cases of on-site inspection data collected from 2019 to April 2022, was used. After preprocessing the foreign food facility data, only the records subject to on-site inspection were extracted using the foreign food facility code, yielding a total of 1,689 records with 103 variables. Among the 103 variables, those with a Theil's U value of 0 were removed, and after dimensionality reduction by Multiple Correspondence Analysis, 49 characteristic variables were finally derived. We built eight different models, performed hyperparameter tuning through 5-fold cross-validation, and then evaluated the performance of the resulting models. Since the purpose of selecting facilities for on-site inspection is to maximize recall, the probability of judging nonconforming facilities as nonconforming, the Random Forest model, which achieved the highest Recall_macro, AUROC, Average PR, F1-score, and Balanced Accuracy, was evaluated as the best model.
Finally, we apply Kernel SHAP (SHapley Additive exPlanations) to present the reasons individual facilities were selected as nonconforming, and discuss applicability to the on-site inspection facility selection system. Based on the results of this study, establishing an imported food management system built on a data-based, scientific risk management model is expected to contribute to the efficient operation of limited resources such as manpower and budget.
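The model selection criterion above, macro-averaged recall, weights every class equally regardless of class size, which is why it suits the goal of catching the (minority) nonconforming facilities. A minimal sketch of the metric, with illustrative labels (1 = nonconforming, 0 = conforming), not the study's data:

```python
def recall_macro(y_true, y_pred):
    """Macro-averaged recall: mean of per-class recalls, each class weighted equally."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        actual = sum(1 for t in y_true if t == c)
        recalls.append(true_positives / actual)
    return sum(recalls) / len(recalls)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
# class-1 recall = 3/4, class-0 recall = 4/6 -> macro average = (0.75 + 0.6667) / 2
print(round(recall_macro(y_true, y_pred), 4))  # -> 0.7083
```

Plain (micro) accuracy on the same labels would be dominated by the majority class; the macro average keeps the nonconforming class's recall fully visible in the score.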

Development of the paper bagging machine for grapes (휴대용 포도자동결속기 개발연구)

  • Park, K.H.;Lee, Y.C.;Moon, B.W.
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.11 no.1
    • /
    • pp.79-94
    • /
    • 2009
  • The research project was conducted to develop a paper bagging machine for grapes. This technology was aimed at greatly reducing the labor required for paper bagging of grapes and bakery goods. In Korea, the agricultural labor force and farm population have decreased rapidly since 1980, so labor is in short supply. Rural areas in particular have a high proportion of female and elderly workers, and labor costs are high; labor-saving technologies are therefore needed to support these older workers with mechanical, labor-saving tools. The research results for the development of the paper bagging machine for grapes are summarized as follows. 1. Development of a new paper bagging machine for grapes - The machine was designed with the CATIA VI2/AUTO CAD2000 programs. - The machine mechanically binds a grape paper bag and had to be light and small; it was designed so that female and elderly workers could use it conveniently during bagging work in the field. - The machine was manufactured with a total weight of less than 350 g. - The average bagging success rate was more than 99% in actual field operation. - The machine was designed with a cartridge so that it could be easily operated between rows and grape branches under field conditions. - The cartridge pin was designed as a C-ring type with a length of 500 mm, suitable for bagging both grapes and bakery goods. - In particular, the machine was developed to be easily operated among the vines of the grape trees. 2. Field trials of the paper bagging machine on grapes - Grape quality was high compared to the untreated control when the paper bagging machine was applied. - The efficiency of the paper bagging machine was 102% of the conventional method, making it a practical alternative. - The roll pin of the paper bagging machine performed well, with 5.3 cm bagging precision. - There was no difference in grape quality between the paper bagging machine and the conventional method.
- Disease infection and berry cracking did not differ between the two treatments.

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the globally competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable potential, big data systems have been implemented by many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive advantage. Big data is in the limelight because, while conventional IT technology had reached the limits of its possibilities, big data goes beyond those limits and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, companies experience difficulties in deriving strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations were utilizing big data well, and many responded that they were having difficulty deriving strategic value and operating through big data. To introduce big data, its strategic value should first be identified, and environmental conditions such as internal and external regulations and systems should be considered, but these factors were not well reflected. The cause of failure turned out to be that big data was introduced in response to IT trends and the surrounding environment, hastily and before the conditions for introduction were in place.
For a successful introduction, the strategic value obtainable through big data should be clearly understood, and a systematic analysis of the environment and of applicability is very important; however, because corporations consider only partial achievements and technological aspects, successful introductions are rare. Previous studies show that most big data research has focused on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors influencing successful big data implementation, and analyzing empirical models. To do this, the elements that can affect the intention to introduce big data were derived by reviewing the success factors of information systems, strategic value perception factors, environmental factors for information system introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed. The analysis showed that the strategic value perception factors and the intra-industry environmental factors positively affected the intention to introduce big data. The theoretical, practical, and policy implications derived from the results are as follows.
The first theoretical implication is that this study has identified the factors affecting the intention to introduce big data by reviewing strategic value perception, environmental factors, and precedent big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on introduction intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (introduction intention), and the moderating variables (type of business and corporate size) for big data introduction intention, and laid a theoretical base for subsequent empirical studies of big data by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in precedent studies, this study can aid subsequent empirical research on the factors affecting big data introduction. The operational implications are as follows. First, this study laid an empirical foundation for the big data field by investigating the cause-and-effect relationships between the strategic value perception and environmental factors and introduction intention, and by proposing measurement items with established reliability and validity. Second, this study found that the strategic value perception factors positively affect big data introduction intention, underscoring the importance of strategic value perception.
Third, the study proposes that a corporation introducing big data should do so on the basis of a precise analysis of its industry's internal environment. Fourth, this study shows that the size and type of business of the corporation should be considered when introducing big data, since the effect factors differ depending on corporate size and business type. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be realized in various ways in products, services, productivity, decision-making, and other areas, and can be utilized across all business fields on that basis, but major domestic corporations are considering only parts of the product and service fields. Accordingly, when introducing big data, it will be necessary to review utilization in detail and design the big data system in a form that maximizes utilization. Second, the study points to the burden of system introduction costs, difficulty in utilizing the system, and lack of trust in supplier corporations as obstacles in the introduction phase. Since global IT corporations dominate the big data market, the big data introductions of domestic corporations cannot help but depend on foreign corporations. Considering that Korea, despite being a world-leading IT country, does not have global IT corporations of that kind, big data can be seen as a chance to foster world-class corporations; the government therefore needs to foster leading corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation.
In big data, how much value can be derived from the data matters more than the system construction itself. This requires talent equipped with academic knowledge and experience in various fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. This study laid a theoretical base for empirical studies in big data-related fields by identifying and verifying the main variables that affect big data introduction intention, and is expected to provide useful guidelines for the corporations and policy-makers who are considering big data implementation.

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low; it can cause judgment errors above 30%. Therefore, an accurate steel plate fault diagnosis system has long been required by the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, which stems from the fact that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate fault diagnosis system was developed in four main stages. In the first stage, after various reference groups and related variables are defined, steel plate fault data is collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratio and SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate fault dataset used in the study is taken from the University of California, Irvine (UCI) machine learning repository. The proposed S-MTS-based system achieves 90.79% classification accuracy, 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry. In addition, the variable optimization process allows the proposed system to reduce the number of measurement sensors installed in the field. These results show that the proposed system not only performs well on steel plate fault diagnosis but can also reduce operation and maintenance costs.
In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
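The core S-MTS idea, one Mahalanobis space (mean and covariance) per fault class and assignment of a test sample to the class with the smallest Mahalanobis distance, can be sketched as follows. The two-class, two-variable data is synthetic, not the UCI steel plate dataset, and the sketch omits the orthogonal-array/SN-ratio variable optimization stages.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_space(samples):
    """Mean vector and inverse covariance defining a class's Mahalanobis space."""
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    return mu, cov_inv

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance from x to the space defined by (mu, cov_inv)."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Two hypothetical fault classes described by a 2-variable measurement scale
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
spaces = {"A": fit_space(class_a), "B": fit_space(class_b)}

# Classify by comparing the distances to all class spaces simultaneously
x = np.array([2.8, 3.1])  # test sample near class B's center
pred = min(spaces, key=lambda c: mahalanobis(x, *spaces[c]))
print(pred)  # prints "B"
```

Classic MTS builds only the single "normal" space and thresholds the distance; building one space per class, as above, is what makes the simultaneous multi-class comparison possible.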

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.1
    • /
    • pp.37-46
    • /
    • 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005 - an increase of 22 percent. From 2001 to 2006, retail e-sales increased at an average annual growth rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academics and practitioners, a great deal of uncertainty remains about the impact of Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of the new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing the channel before and after Internet channel entry. The model consists of a monopolist manufacturer selling its product through a channel system that includes one independent physical store before the entry of an Internet store. The addition of the Internet store results in a mixed channel comprising two different types of stores. The new Internet store can be launched by the independent physical store, such as Best Buy; in this case, the physical retailer coordinates the two types of stores to maximize their joint profits. The Internet store can also be introduced by an independent Internet retailer, such as Amazon; in this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering; thus, the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1.
It is assumed that the manufacturer plays as a Stackelberg leader, maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits conditional on the manufacturer's wholesale price. The price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as is typical in this type of study. To explore the research questions above, this study develops a game-theoretic model with three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace), which enables the model to demonstrate that the effect of adding an Internet store differs from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store and for using an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates all three characteristics. The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer, and the retailer coordinates both types of stores to maximize their joint profits, retail prices increase due to a combination of retail price coordination and wider market coverage.
The quantity sold does not increase significantly despite the wider market coverage, because the excessively high retail prices partially offset the market coverage effect. Interestingly, the coordinated total retail profits are lower than the combined retail profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailer could be better off managing the two channels separately rather than coordinating them, unless it has foresight of the manufacturer's pricing behavior. The introduction of an Internet channel is also found to affect the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer, which implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to raise the wholesale price and extract more of the total channel profit. The retailers, however, cannot raise retail prices accordingly because of the intense retail-level competition, which further weakens their channel power. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices brought about by the retail competition. The model employed in this study is not designed to capture all the characteristics of the Internet channel. The theoretical model can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales via mail. The model is nevertheless labeled "Internet" for two reasons: first, the most representative example of a store that is not geographically constrained is the Internet; second, catalog sales usually determine their target markets using pre-specified mailing lists, so the model used in this study is closer to the Internet than to catalog sales. 
However, mathematically and theoretically distinguishing the core differences among stores that are not geographically constrained would be a desirable direction for future research. The model is simplified by a set of assumptions to maintain mathematical tractability. First, this study assumes that price is the only strategic tool for competition; in the real world, however, various marketing variables can be used for competition, so a more realistic model could incorporate other marketing variables such as service levels or operation costs. Second, this study assumes a market with one monopoly manufacturer, so the results should be interpreted carefully in light of this limitation; future research could relax it by introducing manufacturer-level competition. Finally, some of the results rest on the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption among game-theoretic studies of this kind, analyzing the model under different game rules could deepen the understanding and generalize the findings beyond this assumption.
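The two-stage pricing logic described in the abstract (a retailer best response followed by the manufacturer's leadership move) can be sketched with a minimal example. This is only an illustration, not the paper's actual model: it assumes a hypothetical linear demand q = 1 - p, a single retailer, and the zero marginal cost stated above.

```python
# Stage 2: given wholesale price w, the retailer maximizes (p - w)(1 - p).
# First-order condition: 1 - 2p + w = 0  =>  p*(w) = (1 + w) / 2
def retail_price(w):
    return (1 + w) / 2

# Stage 1: the Stackelberg-leader manufacturer anticipates p*(w) and
# maximizes w * q = w * (1 - p*(w)) = w(1 - w)/2, which peaks at w* = 1/2.
def manufacturer_profit(w):
    return w * (1 - retail_price(w))

# Verify the analytic optimum numerically by scanning candidate prices.
w_star = max((i / 1000 for i in range(1001)), key=manufacturer_profit)
p_star = retail_price(w_star)
print(w_star, p_star)  # 0.5, 0.75
```

The double marginalization visible here (retail price 0.75 even though cost is zero) is the same vertical-interaction effect the model's third characteristic is built to capture.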


Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags. Active tags have their own power source and can operate independently, while passive tags are small and low-cost, which makes passive tags more suitable for the distribution industry than active tags. A reader processes the information received from tags. An RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied to a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the adoption of RFID systems, several problems (price, size, power consumption, security) must be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes in RFID systems fall into three categories: probabilistic, deterministic, and hybrid. In this paper, we consider ALOHA-based protocols as probabilistic methods and Tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots; each tag randomly selects a slot and transmits its ID in it. Being probabilistic, however, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, Tree-based protocols guarantee that a reader identifies all tags within its transmission range. In Tree-based protocols, a reader sends a query and tags respond to it with their IDs. When a reader sends a query and two or more tags respond, a collision occurs, and the reader constructs and sends a new query. Frequent collisions degrade identification performance, so identifying tags quickly requires reducing collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID. 
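The probabilistic behaviour of slotted ALOHA described above can be illustrated with a small simulation. This is a sketch under assumed parameters (frame size, tag count), not any specific standardized protocol: each tag picks a slot uniformly at random, and only slots with exactly one responder succeed.

```python
import random

def framed_slotted_aloha(n_tags, frame_size, rounds=1000, seed=1):
    """Average number of tags identified in one frame when each tag
    transmits in a uniformly random slot (singleton slots succeed)."""
    rng = random.Random(seed)
    identified = 0
    for _ in range(rounds):
        slots = [0] * frame_size
        for _ in range(n_tags):
            slots[rng.randrange(frame_size)] += 1
        # Slots with two or more responders are collisions; empty slots are idle.
        identified += sum(1 for s in slots if s == 1)
    return identified / rounds

# With as many slots as tags, expectation is n(1 - 1/n)^(n-1) ~ n/e,
# so some tags always remain unidentified after a single frame.
print(framed_slotted_aloha(16, 16))
```

This is why, as the abstract notes, ALOHA-based schemes cannot guarantee complete identification: collided tags must retry in later frames, with no deterministic bound on when they all succeed.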
Tags from the same company or manufacturer have similar IDs that share a prefix, so unnecessary collisions occur when identifying multiple tags with the Query Tree protocol. This increases the number of query-responses and the idle time, and the identification time grows significantly as a result. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree protocol and the Query Tree protocol, only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information from tag IDs, and a prediction technique. We compare the proposed scheme with other Tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
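The baseline Query Tree mechanism that the abstract's adaptive scheme improves on can be sketched as follows. This is a minimal binary simulation, not the proposed Adaptive M-ary variant: after every collision the reader extends the query prefix by one bit, so similar IDs with a shared prefix force many extra query-responses.

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Simulate the binary Query Tree protocol: the reader queries a
    prefix, tags whose IDs start with it respond, and a collision
    splits the prefix into prefix+'0' and prefix+'1'."""
    identified, queries = [], 0
    queue = deque([""])                  # start from the empty prefix
    while queue:
        prefix = queue.popleft()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:         # exactly one tag answers: identified
            identified.append(responders[0])
        elif len(responders) > 1:        # collision: extend the prefix by one bit
            queue.append(prefix + "0")
            queue.append(prefix + "1")
    return identified, queries

# Tags sharing a common prefix, as within one manufacturer's EPC block.
tags = ["0010", "0011", "0110", "0111"]
ids, n = query_tree_identify(tags)
print(sorted(ids), n)  # all 4 tags identified, 13 query-responses
```

Even for four 4-bit tags the shared prefix costs 13 query-responses, several of them idle or colliding; resolving more than one bit per round (m-bit recognition) and exploiting collision information is exactly where the paper's protocol gains its efficiency.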