• Title/Summary/Keyword: Deterministic Model


A Study on the Outsourcing Strategies of Group Affiliates That Have an In-Group SI Company (SI업체를 가진 그룹내 계열사들의 외주 위탁 전략에 관한 연구)

  • Lee, Jae-Nam;Kim, Young-Gul
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1995.09a
    • /
    • pp.16-42
    • /
    • 1995
  • The distinguishing characteristic of the domestic outsourcing market in this period of transition is that it is formed around large conglomerates. Roughly half of the total market consists of the groups' internal information systems management costs, and large outsourcing vendors with capital of over one billion won hold about 70% of the market. Most groups have consolidated the information systems divisions of their affiliates into a system integration (SI) company, through which they manage and operate their internal information systems. As a result, affiliates have unconditionally outsourced their information systems to the in-group SI company, regardless of their own business environment and requirements. However, since it is no exaggeration to say that a firm's future depends on the quality of the information its information systems provide, affiliates should consider not only the in-group SI company but also external firms with specialized skills and experience; that is, they should select the outsourcing vendor best suited to the firm's environment and required information technology. To help affiliates determine the optimal outsourcing strategy according to their internal situation and group environment, this study introduces two factors: organizational information intensity and a group influence index. Using these factors, we present a contingency model of outsourcing suited to each affiliate's particular business environment, and, to validate the proposed model, we analyzed and evaluated cases of domestic conglomerates that own SI companies. Eleven affiliates from six groups were selected, and each affiliate's current outsourcing strategy, organizational information intensity, group influence factors, and user satisfaction with its information systems were investigated through interviews with senior managers. The results of these case studies show that user satisfaction with information systems is higher when the outsourcing strategy suggested by the model for the affiliate's situation matches its current outsourcing strategy.


Real Option Study on Cookstove Offset Project under Emission Allowance Price Uncertainty (배출권 가격 불확실성을 고려한 고효율 쿡스토브 보급사업 실물옵션 연구)

  • Lee, Jaehyung
    • Environmental and Resource Economics Review
    • /
    • v.29 no.2
    • /
    • pp.219-246
    • /
    • 2020
  • From Phase II (2018~2020) of the K-ETS, offset credits from 'CDM projects that domestic companies and others have carried out in foreign countries' can be used in the K-ETS. As a result, stakeholders in the K-ETS market are actively developing overseas CDM projects, such as the 'high-efficiency cook stove project', which can secure a large amount of credits while marginal cost is relatively low. This paper develops an investment decision-making model for the 'high-efficiency cook stove' offset project using the real option approach. Under emission allowance price uncertainty, the optimal investment threshold (p*) is derived and a sensitivity analysis is conducted. As a result, in the standard scenario (PoA-S), the optimal investment threshold is 29,054 won/ton, which is lower than the spot price (p_spot). However, allocation entities should consider not only the economics of the CDM project but also CDM risk factors such as the non-renewable biomass ratio, the cook stove replacement ratio, the equity ratio with the host country, the investment period, and the submission limits on emission allowances. In addition, offset project developers will be able to derive the optimal investment threshold for each business stage and use it for economic feasibility checks.
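As a rough illustration of the real option logic in this entry, the sketch below computes the perpetual-option investment threshold for a project whose underlying price follows geometric Brownian motion, in the spirit of Dixit-Pindyck models commonly used for such analyses; the function name and parameter values are illustrative assumptions, not the paper's calibration.

```python
import math

def investment_threshold(r, delta, sigma, I):
    """Perpetual real-option investment threshold under GBM.

    r: discount rate, delta: convenience/dividend-like yield,
    sigma: price volatility, I: sunk investment cost.
    Returns (beta, V_star): the positive root of the fundamental
    quadratic and the project value that triggers investment.
    """
    a = 0.5 - (r - delta) / sigma**2
    beta = a + math.sqrt(a**2 + 2 * r / sigma**2)
    V_star = beta / (beta - 1) * I  # invest once value exceeds this multiple of cost
    return beta, V_star

# Illustrative placeholder numbers only
beta, V_star = investment_threshold(r=0.03, delta=0.02, sigma=0.3, I=1.0)
print(f"beta = {beta:.3f}, threshold V*/I = {V_star:.3f}")
```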

Prediction of Housing Price Index Using Artificial Neural Network (인공신경망을 이용한 주택가격지수 예측)

  • Lee, Jiyoung;Ryu, Jae Pil
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.4
    • /
    • pp.228-234
    • /
    • 2021
  • Real estate market participants need a sense of how real estate prices will move when making decisions. Commonly used methodologies, such as regression analysis, ARIMA, and VAR, have limitations in predicting the value of an asset that fluctuates due to unknown variables. Therefore, to mitigate these limitations, an artificial neural network is used to predict the price trend of apartments in Seoul, the hottest real estate market in South Korea. For artificial neural network learning, the learning model is designed with 12 variables, which are divided into macro and micro factors. The study was conducted in three ways: CASE1 with macro factors, CASE2 with micro factors, and CASE3 with the combination of both factors. As a result, CASE1 and CASE2 show 87.5% predictive accuracy during the two-year experiment, and CASE3 shows 95.8%. This study defines various factors affecting apartment prices in macro and micro terms. The study also proposes an artificial neural network technique for predicting the price trend of apartments and analyzes its effectiveness. Therefore, it is expected that the recently developed learning technique can be applied to the real estate industry, enabling more efficient decision-making by market participants.
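As a minimal sketch of the kind of model described (not the authors' exact architecture, which the abstract does not specify), the following trains a small multilayer perceptron on 12 synthetic input variables to classify the direction of a price index over a two-year holdout:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 12))  # 240 months x 12 assumed macro/micro variables
# Synthetic up/down label driven by two of the features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=240)) > 0

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X[:-24], y[:-24])          # train on all but the last 24 months
print(f"two-year holdout accuracy: {model.score(X[-24:], y[-24:]):.1%}")
```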

Estimating Land Assets in North Korea: Framework Development & Exploratory Application (북한지역 토지자산 추정에 관한 연구: 프레임워크 개발 및 탐색적 적용)

  • Lim, Song
    • Economic Analysis
    • /
    • v.27 no.2
    • /
    • pp.71-123
    • /
    • 2021
  • In this study, we present a methodology and model to estimate land prices and the value of land assets in North Korea in the absence of any data about land characteristics from North Korean authorities. Using this framework, we experimentally make market price-based estimates for land assets across the entire urban area of North Korea. First, we estimate the determinants of land prices in South Korea using data on market prices of land from the late 1970s, when the income gap between South and North Korea was estimated to be relatively small, and from the early 1980s, when urbanization levels in the two Koreas were similar. Second, we calculate land prices and their relative ratios for each city and urban area in North Korea around 2015 by substituting proxy variables of the determinants of land prices, derived through a geographic information analysis of North Korea, into the land-price function we have already estimated. Finally, we estimate the value of land assets in urban areas across North Korea by combining the ratio of housing transaction prices surveyed in several North Korean cities with the relative prices estimated in this research. In terms of the relative price ratio by city, land prices in urban areas of North Korea are estimated to be highest, at 100.00, in the Tongdaewon district of Pyongyang, and lowest, at 1.70, in Phungso county, Ryanggang Province. Meanwhile, the value of land assets in urbanized areas was estimated at $21.6 billion in 2015, which was 1.2 to 1.3 times the GDP of North Korea that year. This ratio is similar to South Korea's in the 1978-1980 period, when the South Korean economy grew at an average rate of 6%. Considering North Korea's growth rate of about 1% in the 2013-2014 period, its ratio of land assets to GDP appears very high.
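A minimal sketch of the two-step framework, with made-up variable names and synthetic data: estimate a land-price function on (hypothetical) South Korean observations, then substitute North Korean GIS-derived proxies into that function to obtain relative prices by district:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Step 1: fit log land price on assumed determinants (e.g., accessibility,
# density, slope) using synthetic stand-ins for the South Korean data
X_sk = rng.normal(size=(500, 3))
log_p = 1.2 * X_sk[:, 0] + 0.8 * X_sk[:, 1] - 0.3 * X_sk[:, 2] \
        + rng.normal(scale=0.2, size=500)
hedonic = LinearRegression().fit(X_sk, log_p)

# Step 2: plug North Korean proxy variables into the estimated function
X_nk = rng.normal(size=(10, 3))                 # 10 hypothetical districts
rel = np.exp(hedonic.predict(X_nk))
rel = 100 * rel / rel.max()                     # index top district at 100.00
print(np.round(rel, 2))
```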

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.21-41
    • /
    • 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to draw insights and demonstrate the value of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in any other type of media. However, there are some limitations to the improvements in service quality that can be made based on opinions posted on social media platforms. Users on social media platforms express their opinions as text, images, and so on, so raw data sets from these reviews are unstructured. Moreover, these data sets are too big for new information and hidden knowledge to be extracted by human effort alone. To use them for business intelligence and analytics applications, proper big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach to directly yield insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining to extract the topics contained in the reviews and decision tree modeling to explain the relationship between topics and ratings. Topic mining refers to a method for finding, from a collection of documents, a group of words that represents a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most universal. However, LDA alone is not enough to find insights that can improve service quality because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique. Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study investigates an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we find the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain for service quality; for example, compared with the other hotels, one hotel's good location and room condition stand out in its positive reviews. In contrast, we also find from negative reviews what the hotels should modify in their services; for example, one hotel should improve the soundproofing of its rooms. These results show that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers. However, these methods are often costly and time-consuming, and the results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a form of big data analysis.
It can thus be a useful tool for overcoming the limitations of surveys and interviews. Moreover, our approach can easily obtain service quality information for other hotels or services in the tourism industry because it needs only openly available online reviews and ratings as input data. Furthermore, the performance of our approach should improve as other structured and unstructured data sources are added.
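A minimal sketch of the proposed pipeline under stated assumptions: fit LDA topics on review text, then fit a CART tree relating per-review topic weights to ratings. The four toy reviews and ratings are placeholders, not the Hong Kong data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeRegressor, export_text

reviews = ["great location near the station", "room was noisy and small",
           "friendly staff and clean room", "poor soundproofing, thin walls"]
ratings = [5, 2, 5, 1]

counts = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
weights = lda.fit_transform(counts)            # per-review topic distributions

tree = DecisionTreeRegressor(max_depth=2).fit(weights, ratings)
print(export_text(tree, feature_names=["topic_0", "topic_1"]))
```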

An Evaluation Model for Software Usability using Mental Model and Emotional factors (정신모형과 감성 요소를 이용한 소프트웨어 사용성 평가 모델 개발)

  • Kim, Han-Saem;Kim, Hyo-Young;Han, Hyuk-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.117-128
    • /
    • 2003
  • Software usability is a characteristic of software that is evaluated in terms of learnability, effectiveness, and satisfaction. Usability is a main factor of software quality, and software should be continuously improved by following guidelines that come from usability evaluation. Usability factors may vary among different software products, and even for the same factor, users may have different opinions according to their experience and knowledge. Therefore, a usability evaluation process must be developed with consideration of many factors, such as the variety of applications and users. Existing approaches such as satisfaction evaluation and performance evaluation only evaluate the result and do not perform cause analysis, and their unified evaluation items and contents do not reflect the characteristics of individual products. To address these problems, this paper presents an evaluation model based on the mental model and the emotions of users. The model uses evaluation factors for user tasks, which are extracted by analyzing usage of the target product. In the mental model approach, the conceptual model of the designer and the mental model of the user are compared, and the differences are taken as a gap and reported as areas for future improvement. In the emotional factor approach, emotional factors are extracted for the target product and the product is evaluated in terms of these factors. With the proposed method, we can evaluate software products with customized attributes and deduce guidelines for future improvements. We also take a GUI framework as a sample case and extract directions for improvement. As this model analyzes the tasks of users and uses evaluation factors for each task, it is capable not only of reflecting the characteristics of the product, but also of exactly identifying the items that should be modified and improved.
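As one hedged illustration of the mental-model gap idea (our own toy formulation, not the paper's formal model), each model can be represented as an ordered list of task steps and the differences reported as candidate improvements:

```python
# Designer's conceptual model vs. a user's mental model of one task
designer = ["open menu", "select export", "choose format", "confirm"]
user     = ["open menu", "choose format", "select export", "confirm"]

missing = [s for s in designer if s not in user]      # steps the user never forms
extra   = [s for s in user if s not in designer]      # steps the user invents
order_gaps = [(i, d, u) for i, (d, u) in enumerate(zip(designer, user)) if d != u]

print("missing steps:", missing)
print("unexpected steps:", extra)
print("order mismatches:", order_gaps)
```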

A Methodology to Develop a Curriculum of Landscape Architecture based on National Competency Standards (국가직무능력표준(NCS) 기반 조경분야 교육과정 개발)

  • Byeon, Jae-Sang;Shin, Sang-Hyun;Ahn, Seong-Ro
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.45 no.2
    • /
    • pp.23-39
    • /
    • 2017
  • This study began from the question: is there a way to efficiently reflect industrial demand in the university curriculum? The research focused on how to actively accept and respond to the era of the NCS (National Competency Standards). In order to apply the NCS to individual departments of a university, industry personnel must actively participate in forming a practical-level curriculum based on the NCS that can be linked to work and qualifications. A valid procedure for developing a curriculum based on the NCS, as proposed by this study, is as follows. First, the university must select a specific classification of the NCS, considering the relevant industry outlook, the specialties of professors in the university, the relationship with regional industries, the prospects for future employment, and the need for industrial manpower. Second, departments must establish a type of human resource that reconciles the goals of the university education with the missions of the chosen NCS. In this process, unique competency units of the university that can support basic or applied subjects should be added to the task model. Third, the task model based on the NCS should be completed by verifying each competency unit's acceptance or rejection in the curriculum. Fourth, subjects corresponding to each competency unit within the task model should be developed while considering hours and credits according to university regulations. After this, a clear subject description of how to operate and evaluate the contents of the curriculum should be created. Fifth, a roadmap determining the semester or year in which each subject is operated should be built. This roadmap becomes the basis of the competency achievement frame used to decide upon the adoption of a Process Evaluation Qualification System. In order for the NCS to be successfully established within the university, a consensus among professors, students, and staff members on the necessity of the NCS must first be reached. Unlike a traditional professor-driven curriculum, the student-oriented NCS curriculum requires sufficient understanding of, and empathy for, the many sacrifices and commitments of the members of the university.

Real Option Analysis to Value Government Risk Share Liability in BTO-a Projects (손익공유형 민간투자사업의 투자위험분담 가치 산정)

  • KU, Sukmo;LEE, Sunghoon;LEE, Seungjae
    • Journal of Korean Society of Transportation
    • /
    • v.35 no.4
    • /
    • pp.360-373
    • /
    • 2017
  • BTO-a is the type of PPP project in Korea that carries demand risk. When demand risk is realized, the private investor encounters financial difficulties due to lower-than-expected revenue, and the government may also have problems with stable infrastructure operation. In this regard, the government has applied various risk-sharing policies in response to demand risk. However, the amount of the government's risk sharing is a contingent liability arising from demand uncertainty, and it cannot be quantified by the conventional NPV method expressed in the text of the concession agreement. The purpose of this study is to estimate the value of the investment risk shared by the government, considering the demand risk in the profit-and-loss sharing system (BTO-a) introduced in 2015 as one of the demand risk-sharing policies. The investment risk sharing takes the form of an option in finance: private investors have the right to claim subsidies from the government when their revenue declines, while the government has the obligation to pay subsidies under certain conditions. In this study, we established a methodology for estimating the value of investment risk sharing using the Black-Scholes option pricing model and examined the appropriateness of the results through case studies. As a result of the analysis, the value of investment risk sharing is estimated at 12 billion won, which is about 4% of the private investment cost. In other words, the government provides 12 billion won of financial support by sharing the investment risk. The option value, when the traffic volume risk is treated as a random variable in the case studies, is derived as an average of 12.2 billion won with a standard deviation of 3.67 billion won, and, from the cumulative distribution, the option value of the 90% probability interval lies within the range of 6.9 to 18.8 billion won. The method proposed in this study is expected to help the government and private investors better understand the risk analysis and economic value of investment risk sharing under the uncertainty of future demand.
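Since the government's subsidy obligation gives private investors a right that pays off when revenue falls, it can be sketched as a European put valued with the Black-Scholes model the study applies; the numbers below are illustrative placeholders, not the case study's inputs:

```python
import math
from statistics import NormalDist

def bs_put(S, K, r, sigma, T):
    """Black-Scholes value of a European put on underlying S with strike K."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return K * math.exp(-r * T) * N(-d2) - S * N(-d1)

# S: expected project revenue, K: guaranteed floor (hypothetical values)
print(f"risk-sharing value: {bs_put(S=100, K=90, r=0.025, sigma=0.2, T=10):.2f}")
```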

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to obtain excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric values in the data must be discretized because rough set analysis accepts only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals, and all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first applies naïve scaling to the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance when using rough set analysis. In this study, we compare stock market timing models that use rough set analysis with the various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for comparison purposes. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
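A minimal sketch of equal frequency scaling, the first of the four discretization methods: cuts are interior quantiles, so roughly the same number of observations falls into each interval. The series below is synthetic, not KOSPI 200 data:

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Cut points splitting `values` into equally populated intervals."""
    qs = np.linspace(0, 1, n_intervals + 1)[1:-1]  # interior quantiles only
    return np.quantile(values, qs)

rng = np.random.default_rng(0)
indicator = rng.normal(size=660)          # e.g., one indicator over 660 days
cuts = equal_frequency_cuts(indicator, 4)
codes = np.digitize(indicator, cuts)      # categorical codes 0..3
print("cuts:", np.round(cuts, 3), "| bin counts:", np.bincount(codes))
```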

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo, and the new car settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo, together with the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and thus suggests less aggressive used car price discounts in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this fact, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model; which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
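A minimal sketch of the two-level nested logit structure described above (car model at the top level, new vs. used at the bottom); the utilities and dissimilarity parameter are made-up numbers, not the paper's estimates:

```python
import math

def nested_logit_probs(utilities, lam):
    """utilities: {nest: {alt: v}}; lam: dissimilarity parameter in (0, 1]."""
    iv = {n: math.log(sum(math.exp(v / lam) for v in alts.values()))
          for n, alts in utilities.items()}                 # inclusive values
    denom = sum(math.exp(lam * v) for v in iv.values())
    probs = {}
    for n, alts in utilities.items():
        p_nest = math.exp(lam * iv[n]) / denom              # P(choose nest)
        within = sum(math.exp(v / lam) for v in alts.values())
        for a, v in alts.items():
            probs[(n, a)] = p_nest * math.exp(v / lam) / within
    return probs

u = {"Jetta": {"new": 1.0, "used": 0.6}, "Elantra": {"new": 0.9, "used": 0.7}}
for choice, p in nested_logit_probs(u, lam=0.5).items():
    print(choice, round(p, 3))
```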
