• Title/Summary/Keyword: Policy Research

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) examined the new and used car markets at the same time to assess the effect of new car model launches on used car prices. Their studies have some limitations, though, in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified since its dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as the focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities and therefore suggests a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model. Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set. (A small illustrative sketch of the two-stage share computation follows this entry.)

  • PDF
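As a rough illustration of the BNU structure described in this abstract (car-model choice at the first stage, new-versus-used choice at the second stage), the following Python sketch computes nested logit choice shares. The utilities, the two models used, and the dissimilarity parameter are invented for illustration; they are not the paper's estimates.

```python
import numpy as np

def nested_logit_shares(utilities, dissimilarity):
    """utilities: {model: {"new": u, "used": u}}; dissimilarity in (0, 1]."""
    models = list(utilities)
    inclusive_values = []
    within_probs = {}
    for m in models:
        u = np.array([utilities[m]["new"], utilities[m]["used"]])
        scaled = np.exp(u / dissimilarity)
        inclusive_values.append(dissimilarity * np.log(scaled.sum()))  # inclusive value of nest m
        within_probs[m] = scaled / scaled.sum()        # P(new | model m), P(used | model m)
    iv = np.array(inclusive_values)
    top = np.exp(iv) / np.exp(iv).sum()                # P(model m) at the first stage
    return {m: {"new": top[i] * within_probs[m][0],
                "used": top[i] * within_probs[m][1]}
            for i, m in enumerate(models)}

# toy example: a rebate on the new Jetta raises its utility and shifts all four shares
print(nested_logit_shares({"Jetta":   {"new": 1.2, "used": 1.0},
                           "Corolla": {"new": 1.1, "used": 0.9}},
                          dissimilarity=0.6))
```

A dissimilarity parameter below 1 makes new and used cars of the same model closer substitutes than cars of different models, which is the behavior the abstract describes; a value above 1 would signal the mis-specification noted for the NUB comparison model.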

The Impact of Market Environments on Optimal Channel Strategy Involving an Internet Channel: A Game Theoretic Approach (시장 환경이 인터넷 경로를 포함한 다중 경로 관리에 미치는 영향에 관한 연구: 게임 이론적 접근방법)

  • Yoo, Weon-Sang
    • Journal of Distribution Research / v.16 no.2 / pp.119-138 / 2011
  • Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted analyses under a specific market condition and presented the findings as the general impact of Internet channel introduction; their results are therefore strongly influenced by the specific market settings. Firms, however, face a variety of market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game-theoretic model. We capture various market conditions with consumer density and the disutility of using the Internet.

    The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). Alternatively, an independent Internet retailer such as Amazon could enter this market (II); in this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed over a two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location (${\chi}_i$) and his disutility of using the Internet channel (${\delta}_{N_i}$).
    These two consumer heterogeneities capture a range of market conditions. Case (a) represents a market with symmetric consumer distributions; the model also explicitly captures asymmetric distributions of consumer disutility. In a market like case (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. This case represents, for example, a market in which 1) the product is suitable for Internet transactions (e.g., books) or 2) the level of e-commerce readiness is high, as in Denmark or Finland. On the other hand, in a market like case (b), the average consumer disutility of using an Internet store is relatively greater than that of using a physical store. Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, could be examples of this market condition. In the scenarios of consumer distributions analyzed in this study, the range of the disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of consumer locations (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
    The analysis results can be summarized as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with a decreasing average travel cost relative to the average disutility of Internet use. We find that a channel that has an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results indicate that the degree of price discrimination also varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as the average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with a decreasing average travel cost. This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.
    The analysis also illustrates how this happens. When managers consider the overall impact of the Internet channel, however, they should consider not only channel power but also sales volume. When both are considered, the introduction of the Internet channel is revealed to be more harmful to a physical retailer in Russia than to one in Hong Kong, because the decrease in sales volume for a physical store due to Internet channel competition is much greater in Russia than in Hong Kong. The results show that the manufacturer is always better off with any type of Internet store introduction. The independent physical store benefits from opening its own Internet store when the average travel cost is high relative to the disutility of using the Internet. Under the opposite market condition, however, the independent physical retailer could be worse off when it opens its own Internet outlet and coordinates both outlets (RI). This is because the low average travel cost significantly reduces the channel power of the independent physical retailer, further aggravating the already weak channel power caused by myopic inter-channel price coordination. The results imply that channel members and policy makers should explicitly consider the factors determining the relative distributions of both kinds of consumer disutility when they make a channel decision involving an Internet channel. These factors include the suitability of a product for Internet shopping, the level of e-commerce readiness of a market, and the degree of geographic dispersion of consumers in a market. Despite its academic contributions and managerial implications, this study is limited in the following ways. First, a series of numerical analyses was conducted to derive equilibrium solutions because of the complex forms of the demand functions; in the process, we set V=100, ${\lambda}$=1, and ${\beta}$=0.01 (a toy numerical demand sketch in this spirit follows this entry). Future research may change this parameter set to check the generalizability of the results. Second, five different scenarios for market conditions were analyzed; future research could try different sets of parameter ranges. Finally, the model setting allows only one monopoly manufacturer in the market; accommodating multiple competing manufacturers (brands) would generate more realistic results.

  • PDF
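The channel-share intuition above can be illustrated with a very small Monte Carlo demand sketch. The utility form used below (value V minus the travel cost λ·|x| for the physical store, or minus the Internet disutility δ for the online store) and all parameter values are assumptions made for illustration only; they are not the paper's specification.

```python
import numpy as np

def channel_demand(p_store, p_net, x_range=(-100, 100), delta_range=(0, 50),
                   V=100.0, lam=1.0, n=200_000, seed=0):
    """Share of consumers buying from the physical store vs. the Internet store."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(*x_range, n)          # geographic location -> travel cost lam*|x|
    delta = rng.uniform(*delta_range, n)  # disutility of buying online
    u_store = V - lam * np.abs(x) - p_store
    u_net = V - delta - p_net
    buy_store = (u_store >= u_net) & (u_store >= 0)
    buy_net = (u_net > u_store) & (u_net >= 0)
    return buy_store.mean(), buy_net.mean()

# wider geographic dispersion (higher average travel cost) shifts share to the Internet store
print(channel_demand(p_store=60, p_net=60, x_range=(-200, 200)))
print(channel_demand(p_store=60, p_net=60, x_range=(-25, 25)))
```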
  • A Study for Improvement of Nursing Service Administration (병원 간호행정 개선을 위한 연구)

    • 박정호
      • Journal of Korean Academy of Nursing / v.3 no.1 / pp.13-40 / 1972
    • Much has changed in the field of hospital administration in the wake of the rapid development of science, technology and systematic hospital management. However, we still have a long way to go in organization, in the quality of hospital employees, hospital equipment and facilities, and in financial support in order to achieve proper hospital management. These factors greatly affect the ability of hospitals to fulfill their obligations in patient care and nursing services. The purpose of this study is to determine the optimal methods of standardization and quality nursing so as to improve present nursing services through investigation and analysis of various problems concerning nursing administration. This study was undertaken during the six-month period from October 1971 to March 1972. Forty-one comprehensive hospitals were selected from amongst the 139 in the whole country. These were categorized according to the specific purposes of their establishment: 7 university hospitals, 18 national or public hospitals, 12 religious hospitals and 4 enterprise hospitals. The following conclusions have been drawn thus far from information obtained through interviews with the nursing directors in charge of nursing administration in each hospital, and from further investigation of the purposes of establishment, organization, personnel arrangements, working conditions, practices of service, and budgets of the nursing service departments. 1. The nursing administration in this country, along with its activities, has been uncritically adopted from that of developed countries. It is necessary for us to re-establish a medical and nursing system adequate for our social environment through continuous study and research. 2. The survey shows that the 7 university hospitals were chiefly concerned with education, medical care and research; the 18 national or public hospitals with medical care, public health and charity work; the 12 religious hospitals with medical care, charity and missionary work; and the 4 enterprise hospitals with public health, medical care and charity work. In general, the main purposes of the hospitals were those of charity organizations in the pursuit of medical care, education and public benefit. 3. The survey shows that, in general, hospital facilities rate 64 per cent and medical care 60 per cent against a 100 per cent optimum basis in accordance with the medical treatment law and the approved criteria for training hospitals. In these respects, university hospitals achieved the highest standards, followed by religious, enterprise, and national or public hospitals, in that order. 4. The ages of nursing directors range from 30 to 50. The level of education achieved by most of the directors is graduation from a nursing technical high school or a three-year nursing junior college; very few have graduated from college or taken graduate courses. 5. As for the career tenure of nurses in the hospitals: over one-third of the nurses, or 38 per cent, have worked less than one year; those in the category of one to two years represent 24 per cent. This means that a total of 62 per cent of the career nurses have been practicing their profession for less than two years. Career nurses with over 5 years of experience number only 16 per cent; therefore the efficiency of nursing services has been rated very low. 6.
As for the standard of education of the nurses: 62 per cent of them have taken a three-year course of nursing in junior colleges, and 22 per cent in nursing technical high schools. College graduate nurses come to only 15 per cent, and those with graduate courses only 0.4 per cent. This indicates that most of the nurses are from nursing technical high schools and three-year nursing junior colleges. Accordingly, it is advisable that nursing services be divided according to their functions, such as professional nurses, technical nurses and nurse's aides. 7. The survey also shows that the purpose of nursing service administration has been regulated in writing in 74 per cent of the hospitals and not regulated in writing in 26 per cent. The general purposes of nursing are as follows: patient care, assistance in medical care, and education. The main purpose of these nursing services is to establish proper operational and personnel management focused on in-service education. 8. The nursing service department belongs to the medical department in almost 60 per cent of the hospitals. Even where the nursing service department is formally separated, about 24 per cent of the hospitals regard it as a functional unit within the medical department. Only 5 per cent of the hospitals keep the department as a separate one. To the contrary, approximately 12 per cent of the hospitals have not established a nursing service department at all but subordinate it to another department. In this respect, a new hospital organization is required that acknowledges the independent function of the nursing department. In 76 per cent of the hospitals there are advisory committees under the nursing department, such as a dormitory self-regulating committee, an in-service education committee and a nursing procedure and policy committee. 9. Personnel arrangement and working conditions of nurses: 1) The ratio of nurses to patients is as follows: in university hospitals, 1 to 2.9 for hospitalized patients and 1 to 4.0 for out-patients; in religious hospitals, 1 to 2.3 for hospitalized patients and 1 to 5.4 for out-patients. Grouped together, this indicates that one nurse covers 2.2 hospitalized patients and 4.3 out-patients on a daily basis. The current medical treatment law stipulates that one nurse should care for 2.5 hospitalized patients or 30.0 out-patients. The statistics therefore indicate that nursing services are being performed with an insufficient number of nurses to cover out-patients. The current law concerns only the minimum number of nurses and disregards the number of nurses required for operating rooms, recovery rooms, delivery rooms, new-born baby rooms, central supply rooms and emergency rooms. Accordingly, an amendment of the medical treatment law has been requested. 2) The ratio of doctors to nurses: in university hospitals, the ratio is 1 to 1.1; in national or public hospitals, 1 to 0.8; in religious hospitals, 1 to 0.5; and in private hospitals, 1 to 0.7. The average ratio is 1 to 0.8, whereas the ideal ratio is generally 3 to 1. Since the number of doctors working in hospitals has recently been increasing, the nursing services have consequently been overloaded, sacrificing the services to the patients. 3) The ratio of nurses to clerical staff is 1 to 0.4, whereas the ideal ratio is 5 to 1, that is, 1 to 0.2. This means that clerical personnel far outnumber the nursing staff.
4) The ratio of nurses to nurse's aides: the average of 2.5 to 1 indicates that much of the nursing service is delegated to nurse's aides owing to the shortage of registered nurses. This is the main cause of the deterioration in the quality of nursing services. It is a real problem in the quest for better nursing services that certain hospitals employ a disproportionate number of nurse's aides in order to meet financial requirements. 5) As for working conditions, most hospitals employ a three-shift day with 8 hours of duty each; however, certain hospitals still use two shifts a day. 6) As for the working environment, most of the hospitals lack welfare and hygienic facilities. 7) The salary basis is highest in the private university hospitals, with enterprise hospitals next, and religious hospitals and national or public ones lowest. 8) Employment is made through paper screening, and the appointment of nurses is conditional upon the favorable opinion of the nursing directors. 9) The turnover ratio for the year 1971 averaged 29 per cent. The most common reason for leaving was marriage, at up to 40 per cent, followed by overseas employment. This high turnover ratio further causes the deterioration of efficiency in nursing services and supplementary activities. The hospital authorities concerned should take this matter into deep consideration in order to reduce turnover. 10) The importance of in-service education is well recognized and established. It has been noted that on-the-job training of nurses has been most active, with nursing directors taking charge of the orientation programs for newly employed nurses. However, it is most necessary that a comprehensive study be made of instructors, contents and methods of education, with a separate section for in-service education. 10. Nursing service activities: 1) Division of services and job descriptions are urgently required. 81 per cent of the hospitals keep written regulations of services in accordance with nursing service manuals; 19 per cent do not keep written regulations. Most hospitals delegate the power of stipulating service regulations to the nursing directors or certain supervisors. In 21 per cent of the hospitals there are policy committees, standardization committees and advisory committees to proceed with the stipulation of regulations. 2) Approximately 81 per cent of the hospitals have service channels in which directors, supervisors, head nurses and staff nurses perform their appropriate services according to the service plans and make up the service reports. In approximately 19 per cent of the hospitals the staff perform their nursing services without utilizing these channels. 3) In the performance of nursing services, a ward manual is considered the most important tool in about 32 per cent of hospitals; 25 per cent indicate they use a kardex; 17 per cent use ward rounds, and others take advantage of work sheets or coordination with other departments through conferences. 4) About 78 per cent of hospitals keep records indicating the status of personnel, and 22 per cent do not. 5) It has been advised that morale among nurses may be increased, ensuring more efficient services, by enabling them to exchange opinions and views with each other.
6) The satisfactory performance of nursing services relies on the following factors to the degrees indicated: approximately 32 per cent on systematic nursing activities and services; 27 per cent on the head nurse's ability for nursing diagnosis; 22 per cent on an effective supervisory system; 16 per cent on hospital facilities and proper supply; and 3 per cent on effective in-service education. This means that nurses, supervisors, head nurses and directors play the most important roles in the performance of nursing services. 11. About 87 per cent of the hospitals do not have separate budgets for their nursing departments, and only 13 per cent have separate budgets. It is recommended that the planning and execution of the nursing administration be delegated to the pertinent administrators in order to bring about improved performance and activities in nursing services.

    • PDF

    Trends of Cancer Mortality in Gyeongsangbuk-do from 1991 to 1998 (경상북도 주민의 암사망 추이)

    • Kim, Byung-Guk;Lee, Sung-Kook;Kim, Tea-Woong;Lee, Do-Young;Lee, Kyeong-Soo
      • Journal of agricultural medicine and community health / v.26 no.2 / pp.59-78 / 2001
    • Data on reported cancer mortality in Gyeongsangbuk-do province from 1991 to 1998 were collected and analyzed using the existing mortality reporting system as well as the public health network, in order to furnish accurate data on reported cancer deaths and to collect data for establishing a high-quality district health plan. The overall crude death rate in Gyeongsangbuk-do in 1991 was 74.56 deaths per 100,000 persons, and this rate increased to 79.22 in 1998. Among the deaths, the overall proportion due to cancer was 16.7% in 1991, which increased to 19.3% in 1998; specifically, the proportion among men increased from 19.4% in 1991 to 22.3% in 1998, while that among women increased from 12.4% in 1991 to 15.5% in 1998, a greater increase among women. The types of cancer and associated shares of cancer deaths in 1991 were gastric cancer (41.5%), followed by liver cancer (28.8%) and lung and bronchogenic carcinoma (8.7%); in 1998 they were gastric cancer (24.7%), followed by liver cancer (22.7%) and lung and bronchogenic carcinoma (19.3%), in the same order. For men and women, gastric cancer (40.2% and 44.7%, respectively) was the most common cancer death, followed by liver cancer (33.7% and 16.7%, respectively) and lung and bronchogenic carcinoma (10.2% and 5.0%, respectively) in 1991. In 1998, gastric cancer (27.8%) was still the most common type among both men and women, followed by liver cancer (18.5%) and lung and bronchogenic carcinoma (12.7%), showing the largest decrease in gastric cancer and the largest increase in lung and bronchogenic carcinoma. The age-adjusted mortality rates for gastric cancer, hepatoma and laryngeal carcinoma decreased in both males and females, and uterine cancer also decreased in females. The age-adjusted mortality rates for lung and bronchogenic carcinoma, pancreatic cancer and rectal cancer increased in both males and females, and breast cancer also increased in females. The overall age-adjusted death rate calculated on the basis of the 1995 population was 84.25 in 1991, which decreased to 77.67 in 1998 (a minimal sketch of this age-adjustment calculation follows this entry). The male death rate decreased significantly from 119.81 in 1991 to 101.82 in 1998, while the female death rate increased from 48.64 in 1991 to 53.80 in 1998. A census of cancer deaths using accurate death records is important for the establishment of a proper, high-quality district health and medical plan and policy. The effort to improve the accuracy of death reports using the health facility network, as attempted in this study, can be continued. Furthermore, there must be a way for the Health and Welfare Department to use the death reports to improve the present reporting system. Lastly, additional studies need to be conducted to investigate how much the accuracy was improved by the supplemented death reports in this study.

    • PDF
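The age-adjusted rates mentioned above are obtained by direct standardization against a reference population (here, the 1995 population). A minimal sketch of that calculation, with invented figures for three age bands, is shown below.

```python
def age_adjusted_rate(deaths_by_age, population_by_age, standard_population_by_age):
    """Direct age standardization; all arguments are aligned by age group, result per 100,000."""
    total_std = sum(standard_population_by_age)
    adjusted = 0.0
    for d, p, s in zip(deaths_by_age, population_by_age, standard_population_by_age):
        age_specific_rate = d / p                      # rate in the study population
        adjusted += age_specific_rate * (s / total_std)  # weighted by the standard population
    return adjusted * 100_000

# toy numbers for three age bands (young, middle, old)
print(age_adjusted_rate([10, 40, 300], [50_000, 40_000, 20_000], [60_000, 30_000, 10_000]))
```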

    Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

    • Ko, Eunjung;Kim, Namgyu
      • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
    • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. In addition, text is of interest to many analysts because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In addition, much research has been done in academia on both the extraction approach, which selectively provides the main elements of the document, and the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made much progress compared to automatic text summarization itself. Most existing studies dealing with quality evaluation of summarization carried out manual summarization of documents, used these as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and comparison with the reference document, which is an ideal summary, is performed to measure the quality of the automatic summary. Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in preparing the summary, it takes a lot of time and cost, and there is the limitation that the evaluation result may differ depending on the subjectivity of the summarizer. Therefore, in order to overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more frequently a term in the full text appears in the summary, the better the quality of the summary. However, since summarization essentially means condensing a large amount of content while minimizing omissions, it is unreasonable to say that a "good summary" based only on frequency is always a "good summary" in the essential sense. In order to overcome the limitations of these previous studies of summarization evaluation, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization.
Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is left out of the summary. Based on these two concepts, we propose a method for the automatic quality evaluation of text summaries. In order to evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews were summarized for each hotel, and the results of experiments evaluating the quality of the summaries according to the proposed methodology are presented. The study also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method to perform optimal summarization by changing the threshold of sentence similarity.
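A rough sketch of how completeness and succinctness might be computed and combined into an F-score is given below. TF-IDF cosine similarity and a single similarity threshold are assumptions of this sketch, not necessarily the paper's exact operationalization.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summary_quality(source_sents, summary_sents, threshold=0.3):
    vec = TfidfVectorizer().fit(source_sents + summary_sents)
    S = cosine_similarity(vec.transform(source_sents), vec.transform(summary_sents))
    # completeness: share of source sentences covered by at least one summary sentence
    completeness = float((S.max(axis=1) >= threshold).mean())
    # succinctness: share of summary sentences that are not near-duplicates of another one
    R = cosine_similarity(vec.transform(summary_sents))
    np.fill_diagonal(R, 0.0)
    succinctness = float((R.max(axis=1) < threshold).mean())
    f = 0.0 if completeness + succinctness == 0 else \
        2 * completeness * succinctness / (completeness + succinctness)
    return completeness, succinctness, f

print(summary_quality(
    ["the room was clean", "staff were friendly", "breakfast was poor"],
    ["the room was clean", "breakfast was poor"]))
```

Raising the similarity threshold trades completeness against succinctness, which mirrors the threshold-tuning idea mentioned at the end of the abstract.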

    Change of Green Space Arrangement and Planting Structure of Apartment Complexes in Seoul (서울시 아파트단지의 녹지배치 및 식재구조 변화 연구)

    • Lee, Dong-Wook;Lee, Kyong-Jae;Han, Bong-Ho;Jang, Jae-Hoon;Kim, Jong-Yup
      • Journal of the Korean Institute of Landscape Architecture / v.40 no.4 / pp.1-17 / 2012
    • This study was carried out to propose improvements by analyzing the changes in green space arrangement and planting structure of apartment complexes in Seoul. Twelve survey sites with obvious differences were selected to reflect changes in floor area ratio, underground parking, and green space ratio. Based on green space ratio and ground structure, we divided the survey sites for each period into four types: high green ratio (over 40%) apartments on natural ground, low green ratio (under 40%) apartments on natural ground, low green ratio (under 40%) apartments on artificial ground, and high green ratio (over 40%) apartments on artificial ground, and examined plant crown volume, planting density, and planting pattern. The main factors in the change of green space arrangement were green space ratio and ground structure. The green space ratio changed with the floor area ratio and the construction of underground parking, and the floor area ratio was adjusted by government policy and economic conditions. The average width of front green areas changed from 10.0m in high green ratio apartments on natural ground to 3.5m, 2.7m, and 4.5m in each subsequent period. The average width of buffer green areas changed from 15.0m in high green ratio apartments on natural ground to 7.7m and then 2.7m as parking areas were extended in low green ratio apartments on artificial ground, so buffer green areas were reduced and disconnected. Since 2001, however, buffer green areas have been extended again, to an average width of 3.8m, owing to the growing recognition of green space. The ratio of native plants in the canopy layer increased from 45.1% in high green ratio apartments on natural ground in 1980~1983 to 55.6%. Average plant crown volume increased from $1.27m^3/m^2$ in high green ratio apartments on natural ground to $3.47m^3/m^2$ in low green ratio apartments on natural ground, but is only $0.27m^3/m^2$ in high green ratio apartments on artificial ground. The planting density of the canopy layer changed from 5 individuals per $100m^2$ to 14.5 individuals per $100m^2$. Buffer green areas should be constructed on natural ground, and front green areas (width 4.5m) should serve an ecological and aesthetic function in line with a garden concept. Back-side green areas (width over 5.0m) should increase green volume through multi-layer planting of shade and flowering woody species. Side green areas should cover walls and enhance the green landscape through the planting of tall woody species. Ecological planting techniques should be applied to buffer green areas, and buffer green areas should be connected to the inner green areas of apartment complexes.

    Quantitative Analysis of Carbohydrate, Protein, and Oil Contents of Korean Foods Using Near-Infrared Reflectance Spectroscopy (근적외 분광분석법을 이용한 국내 유통 식품 함유 탄수화물, 단백질 및 지방의 정량 분석)

    • Song, Lee-Seul;Kim, Young-Hak;Kim, Gi-Ppeum;Ahn, Kyung-Geun;Hwang, Young-Sun;Kang, In-Kyu;Yoon, Sung-Won;Lee, Junsoo;Shin, Ki-Yong;Lee, Woo-Young;Cho, Young Sook;Choung, Myoung-Gun
      • Journal of the Korean Society of Food Science and Nutrition / v.43 no.3 / pp.425-430 / 2014
    • Foods contain various nutrients such as carbohydrates, protein, oil, vitamins, and minerals. Among them, carbohydrates, protein, and oil are the main constituents of foods. Usually, these constituents are analyzed by the Kjeldahl method, the Soxhlet method, and so on; however, these analytical methods are complex, costly, and time-consuming. Thus, this study aimed to analyze carbohydrate, protein, and oil contents rapidly and effectively with near-infrared reflectance spectroscopy (NIRS). A total of 517 food samples were measured within the wavelength range of 400 to 2,500 nm. Exactly 412 food calibration samples and 162 validation samples were used for NIRS equation development and validation, respectively. For carbohydrates, the most accurate equation was obtained under 1, 4, 5, 1 (1st derivative, 4 nm gap, 5 points smoothing, and 1 point second smoothing) math treatment conditions using the weighted MSC (multiplicative scatter correction) scatter correction method with MPLS (modified partial least squares) regression. For protein and oil, the best equations were obtained under 2, 5, 5, 3 and 1, 1, 1, 1 conditions, respectively, using the standard MSC and standard normal variate only scatter correction methods with MPLS regression. Calibrations of these NIRS equations showed very high coefficients of determination ($R^2$: carbohydrates, 0.971; protein, 0.974; oil, 0.937) and low standard errors of calibration (carbohydrates, 4.066; protein, 1.080; oil, 1.890). The optimal equation conditions were then applied to a validation set of 162 samples. Validation of these NIRS equations showed very high coefficients of determination in prediction ($r^2$: carbohydrates, 0.987; protein, 0.970; oil, 0.947) and low standard errors of prediction (carbohydrates, 2.515; protein, 1.144; oil, 1.370). Therefore, these NIRS equations can be applied to determine carbohydrate, protein, and oil contents in various foods.
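A rough sketch of this kind of NIRS calibration/validation workflow is given below. scikit-learn's PLSRegression stands in for the modified PLS (MPLS) used in the paper, a Savitzky-Golay first derivative stands in for the "1, 4, 5, 1" math treatment, and the spectra and constituent values are synthetic; none of this reproduces the paper's actual data or settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X_cal = rng.normal(size=(412, 1050))       # synthetic "spectra" over 400-2,500 nm at 2 nm steps
y_cal = X_cal[:, 101] - X_cal[:, 99] + rng.normal(scale=0.1, size=412)   # synthetic constituent

# Savitzky-Golay first derivative as a stand-in for the derivative/gap/smoothing treatment
pretreat = lambda X: savgol_filter(X, window_length=9, polyorder=2, deriv=1, axis=1)
pls = PLSRegression(n_components=10).fit(pretreat(X_cal), y_cal)

X_val = rng.normal(size=(162, 1050))       # synthetic validation spectra
y_val = X_val[:, 101] - X_val[:, 99] + rng.normal(scale=0.1, size=162)
pred = pls.predict(pretreat(X_val)).ravel()
print("r2:", r2_score(y_val, pred), "SEP:", mean_squared_error(y_val, pred) ** 0.5)
```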

    A Study on the Regional Characteristics of Broadband Internet Termination by Coupling Type using Spatial Information based Clustering (공간정보기반 클러스터링을 이용한 초고속인터넷 결합유형별 해지의 지역별 특성연구)

    • Park, Janghyuk;Park, Sangun;Kim, Wooju
      • Journal of Intelligence and Information Systems / v.23 no.3 / pp.45-67 / 2017
    • According to the Internet Usage Research conducted in 2016, the number of Internet users and overall Internet usage have been increasing, and the smartphone, compared to the computer, is taking a more dominant role as an Internet access device. As the number of smart devices increases, some expect that demand for high-speed Internet will decrease; despite the increase in smart devices, however, the high-speed Internet market is expected to grow slightly for a while due to the speed-up of Giga Internet and the growth of the IoT market. As the broadband Internet market saturates, telecom operators are competing intensely to win new customers; if they understand the causes of customer churn, however, they can reduce marketing costs through more effective marketing. In this study, we analyzed the relationship between the cancellation rates of telecommunication products and the factors affecting them by combining a telecommunication company's data for three cities (Anyang, Gunpo, and Uiwang) with regional data from KOSIS (Korean Statistical Information Service). In particular, we focused on the assumption that neighboring areas affect the distribution of cancellation rates by coupling type, so we conducted spatial cluster analysis on the three types of cancellation rates of each region using the spatial analysis tool SaTScan and analyzed the various relationships between the cancellation rates and the regional data. In the analysis phase, we first summarized the characteristics of the clusters derived by combining spatial information with the cancellation data. Next, based on the results of the cluster analysis, analysis of variance, correlation analysis, and regression analysis were used to analyze the relationship between the cancellation rate data and the regional data. Based on the results of the analysis, we proposed appropriate marketing methods for each region. Unlike previous studies of regional characteristics, this study is academically differentiated in that it performs clustering based on spatial information so that adjacent regions with similar cancellation patterns are grouped together. In addition, few previous studies of the determinants of subscription to high-speed Internet services have considered regional characteristics; in this study, we analyzed the relationship between the clusters and the regional characteristics data under the assumption that different factors matter in different regions. We also tried to derive more efficient marketing methods that consider the characteristics of each region for new subscriptions and customer management in high-speed Internet. Analysis of variance confirmed that there were significant differences in regional characteristics among the clusters, and correlation analysis showed stronger correlations within the clusters than across all regions. Regression analysis was then used to analyze the relationship between the cancellation rate and the regional characteristics; as a result, we found that the cancellation rate differs depending on regional characteristics, which makes differentiated target marketing for each region possible. The biggest limitation of this study was the difficulty of obtaining enough data for the analysis; in particular, it was difficult to find variables representing regional characteristics at the Dong (neighborhood) level.
In other words, most of the data were disclosed at the city level rather than the Dong level, which limited the detail of the analysis. Data such as income, card usage information, and telecommunications company policies or characteristics that could affect cancellations were not available at the time. The most urgent requirement for a more sophisticated analysis is therefore to obtain Dong-level data on regional characteristics. A direction for future studies is target marketing based on these results; it would also be meaningful to analyze the effect of such marketing by comparing results before and after the target marketing, and it would be effective to build clusters based on new subscription data as well as cancellation data.
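The downstream statistics described in this abstract (ANOVA across clusters, correlation, and regression of cancellation rates on regional variables) can be sketched as below; the spatial scan clustering itself is performed in SaTScan and is not reproduced here. The column names and toy data are hypothetical.

```python
import pandas as pd
from scipy.stats import f_oneway, pearsonr
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cluster":       ["A", "A", "B", "B", "C", "C"],       # spatial clusters from SaTScan
    "churn_rate":    [0.12, 0.14, 0.08, 0.07, 0.10, 0.11],  # cancellation rate per region
    "avg_household": [2.1, 2.3, 3.0, 2.9, 2.5, 2.6],        # hypothetical regional variable
})

# ANOVA: do cancellation rates differ across spatial clusters?
groups = [g["churn_rate"].values for _, g in df.groupby("cluster")]
print(f_oneway(*groups))

# correlation and regression between the cancellation rate and a regional characteristic
print(pearsonr(df["churn_rate"], df["avg_household"]))
print(smf.ols("churn_rate ~ avg_household + C(cluster)", data=df).fit().params)
```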

    Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

    • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
      • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
    • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows, consisting of 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. This approach solves the problem of data imbalance caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although the prediction of corporate default risk using machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduces the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To generate the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power with the stacking ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and the forecasts of each individual model, pairs were constructed between the stacking ensemble model and each individual model.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two model forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed in a statistically significant way from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help in designing models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
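A minimal sketch of the stacking idea described above is given below, on synthetic data. The base learners, the meta-learner, and the paired Wilcoxon signed-rank comparison are illustrative stand-ins for the paper's exact configuration (which also includes a CNN sub-model and a Merton-model-based target).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from scipy.stats import wilcoxon

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)  # synthetic firms
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("mlp", MLPClassifier(max_iter=500, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=7)                                  # 7 folds, mirroring the 7-way split in the abstract
stack.fit(X_tr, y_tr)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
p_stack = stack.predict_proba(X_te)[:, 1]
p_rf = rf.predict_proba(X_te)[:, 1]
# paired nonparametric test on the two models' predicted default probabilities
print(wilcoxon(p_stack, p_rf))
```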

    A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

    • Park, Do-Hyung
      • Journal of Intelligence and Information Systems / v.19 no.1 / pp.35-55 / 2013
    • Information technology is a critical resource for any company hoping to support and realize its strategic goals, which contribute to growth and sustainable development. The selection of information technology and its strategic use are imperative for enhancing the performance of every aspect of company management, which has led a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment. In other words, researchers and managers cannot easily identify the independent factors that affect the investment performance of information technology. This is mainly because many factors, ranging from a company's internal components and strategies to its external customers, are interconnected with the investment performance of information technology. Using an agent-based simulation technique, this research extracts the factors expected to affect investment performance in information technology, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in those factors. In terms of economic modeling, I expand the model that highlights the way in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand creation resulting from product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999). This concept implies that investment in information technology improves the quality of information, which, in turn, improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through an information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. Results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the economic performance of that company, particularly with regard to factors such as consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in the quality of its product or service immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the enhancement of the quality of that company's product or service creates new consumer demand through the information diffusion effect. Finally, the new demand positively affects the company's profit and productivity. In terms of the investment strategy for information technology, the results also reveal that the selection of information technology needs to be based on an analysis of the service and the network effect among customers, and demonstrate that information technology implementation should fit the company's business strategy.
Specifically, if a company seeks a short-term enhancement of performance, it needs a one-shot strategy (making a large investment at one time). On the other hand, if a company seeks a long-term sustainable profit structure, it needs a split strategy (making several small investments at different times). The findings from this study make several contributions to the literature. In terms of methodology, the study integrates economic modeling and simulation techniques in order to overcome the limitations of each methodology. It also indicates the mediating effect of product quality on the relationship between information technology and the performance of a company. Finally, it analyzes the effect of information technology investment strategies and of information diffusion among consumers on the investment performance of information technology.
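The word-of-mouth diffusion mechanism described in this abstract can be illustrated with a toy agent-based sketch: an IT investment raises product quality, and adopters spread new demand to their network neighbors. The network type, adoption rule, and all parameter values below are illustrative assumptions, not the paper's model.

```python
import random
import networkx as nx

def simulate_diffusion(quality, n_agents=500, periods=20, seed=0):
    """Return the cumulative number of adopters per period for a given product quality."""
    random.seed(seed)
    g = nx.watts_strogatz_graph(n_agents, k=6, p=0.1, seed=seed)   # small-world consumer network
    adopted = {node: random.random() < 0.02 for node in g}         # a few seed customers
    demand_path = []
    for _ in range(periods):
        new_adopted = dict(adopted)
        for node in g:
            if adopted[node]:
                continue
            exposed = sum(adopted[nb] for nb in g.neighbors(node))  # adopting neighbors
            # higher quality (driven by the IT investment) raises the adoption probability
            if random.random() < quality * 0.05 * exposed:
                new_adopted[node] = True
        adopted = new_adopted
        demand_path.append(sum(adopted.values()))
    return demand_path

# higher quality spreads demand further through the same network
print(simulate_diffusion(quality=1.0)[-1], simulate_diffusion(quality=1.5)[-1])
```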

