• Title/Summary/Keyword: Historical Cost


Unstable vivax malaria in Korea

  • Ree, Han-Il
    • Parasites, Hosts and Diseases
    • /
    • v.38 no.3
    • /
    • pp.119-138
    • /
    • 2000
  • Korean vivax malaria had long been prevalent throughout the country with low endemicity. As a result of the Korean War (1950-1953), malaria became epidemic. In 1959-1969, when the National Malaria Eradication Service (NMES) was implemented, malaria rates declined, with low endemicity in the south-west and south plain areas and high-endemicity foci in northern Kyongsangbuk-do (province) and northern and eastern Kyonggi-do. NMES activities contributed greatly to accelerating the control and later the eradication of malaria. The Republic of Korea (South Korea) was designated malaria-free in 1979. However, malaria re-emerged in 1993, and an outbreak occurred in northern Kyonggi-do and north-west Kangwon-do (in and/or near the Demilitarized Zone, DMZ), bordering North Korea. It has been postulated that most of the malaria cases resulted from bites of sporozoite-infected females of An. sinensis dispersing from North Korea across the DMZ. Judging from epidemiological and socio-ecological factors, it would not be possible for vivax malaria to remain endemic in South Korea. Historical data show that vivax malaria in Korea is a typical unstable malaria. Epidemics may occur when environmental, socio-economic, and/or political factors change in favor of malaria transmission; when such factors return to normal, malaria rates become low and the disease may disappear. From a cost-effectiveness point of view, passive case detection is the most feasible and recommendable control measure against the unstable vivax malaria in Korea.


Will You Buy It Now?: Predicting Passengers that Purchase Premium Promotions Using the PAX Model

  • Al Emadi, Noora;Thirumuruganathan, Saravanan;Robillos, Dianne Ramirez;Jansen, Bernard Jim
    • Journal of Smart Tourism
    • /
    • v.1 no.1
    • /
    • pp.53-64
    • /
    • 2021
  • Upselling is often a critical factor in revenue generation for businesses in the tourism and travel industry. Utilizing passenger data from a major international airline company, we develop the PAX (Passenger, Airline, eXternal) model to predict the passengers most likely to accept an upgrade offer from economy to premium. Formulating the problem as an extremely unbalanced, cost-sensitive, supervised binary classification, we predict whether a customer will take an upgrade offer. We use a feature vector created from the historical data of 3 million passenger records from 2017 to 2019, in which passengers received approximately 635,000 upgrade offers worth more than US $422,000,000. The model has an F1-score of 0.75, outperforming the airline's current rule-based approach. The findings have several practical applications, including identifying promising customers for upselling and minimizing the number of indiscriminate emails sent to customers. Accurately identifying the few customers who will react positively to upgrade offers is of paramount importance given the airline industry's razor-thin margins. The research results have significant real-world impact because of the potential to improve targeted upselling to customers in the airline and related industries.
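The abstract frames upgrade prediction as an extremely unbalanced, cost-sensitive binary classification scored by F1. A minimal sketch of that framing, with all names and numbers as illustrative assumptions (not the paper's PAX model or data): a probability estimate is turned into an offer decision by expected value rather than a fixed 0.5 threshold, and F1 is computed from confusion counts.

```python
# Hedged sketch of cost-sensitive offer targeting. The margin and
# contact-cost figures are hypothetical, not the airline's numbers.

def should_offer(p_accept: float, margin: float, contact_cost: float) -> bool:
    """Offer the upgrade only if expected revenue exceeds the cost
    of contacting the passenger (e.g. one more promotional email)."""
    return p_accept * margin > contact_cost

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, the metric the
    abstract reports (0.75) for the heavily unbalanced positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

With rare positives, the expected-value threshold is what keeps the classifier from defaulting to "never offer", which is the practical point of a cost-sensitive formulation.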

Fault Diagnosis Method based on Feature Residual Values for Industrial Rotor Machines

  • Kim, Donghwan;Kim, Younhwan;Jung, Joon-Ha;Sohn, Seokman
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.4 no.2
    • /
    • pp.89-99
    • /
    • 2018
  • Downtime and malfunction of industrial rotor machines represent a crucial cost burden and productivity loss. Fault diagnosis of this equipment has recently been carried out to detect faults and their causes by using fault classification methods. However, these methods are of limited use in detecting rotor faults because of their hypersensitivity to unexpected, equipment-specific operating conditions. These limitations tend to reduce the accuracy of fault classification, since fault-related features calculated from the vibration signal shift to other regions or change. To improve the limited diagnostic accuracy of existing methods, we propose a new approach to fault diagnosis of rotor machines based on a model generated by supervised learning. Our work uses feature residual values from vibration signals as fault indices. Our diagnostic model is a robust and flexible process that, once learned from historical data a single time, can be applied to different target systems without re-optimizing the algorithm. The performance of the proposed method was evaluated by comparing its results with conventional methods for fault diagnosis of rotor machines. The experimental results show that the proposed method achieves better fault diagnosis, even when applied to systems with different normal-state signals, scales, and structures, without tuning or a complementary algorithm. The effectiveness of the method was further assessed by simulation using various rotor machine models.
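The abstract describes using feature residual values from vibration signals as fault indices. A minimal sketch under assumed details (the per-feature baseline, standardized residual, and threshold below are illustrative, not the paper's learned model):

```python
# Hypothetical residual-based fault index: features computed from
# normal-state data define a baseline, and the standardized residual
# of a new measurement serves as the fault indicator.
from statistics import mean, stdev

def fit_baseline(normal_features):
    """Learn per-feature (mean, std) from normal-state feature vectors."""
    cols = list(zip(*normal_features))
    return [(mean(c), stdev(c)) for c in cols]

def fault_index(sample, baseline):
    """Largest standardized residual across features."""
    return max(abs(x - m) / s for x, (m, s) in zip(sample, baseline))

def is_faulty(sample, baseline, threshold=3.0):
    """Flag a fault when any feature drifts beyond the threshold."""
    return fault_index(sample, baseline) > threshold
```

Standardizing by the normal-state spread is one way to keep the index comparable across systems with different signal scales, which is the property the abstract emphasizes.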

On the Theoretical Solution and Application to Container Loading Problem using Normal Distribution Based Model (정규 분포 모델을 이용한 화물 적재 문제의 이론적 해법 도출 및 활용)

  • Seung Hwan Jung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.45 no.4
    • /
    • pp.240-246
    • /
    • 2022
  • This paper introduces a container loading problem and proposes a theoretical approach that solves it efficiently. The problem is to determine the proper weight of products loaded on a container that is delivered by third-party logistics (3PL) providers. When the company pre-loads products into a container, typically one or two days in advance of its delivery date, the varying truck weights of 3PL providers and the unpredictability of that randomness make it difficult for the company to meet the total weight regulation. Such randomness is mainly due to physical differences among trucks, fuel level, personalized equipment/belongings, etc. This paper provides a theoretical methodology that uses historical shipping data to deal with the randomness. The problem is formulated as a stochastic optimization in which the truck randomness is captured by a theoretical distribution. A data-analytic solution to the problem is derived that can easily be applied in practice. Experiments using practical data reveal that the suggested approach yields a significant cost reduction compared to a simple average heuristic. This study provides new perspectives on the container loading problem and an efficient solution approach that can be widely applied in diverse industries using 3PL providers.
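The stochastic formulation the abstract describes admits a simple quantile-style solution: if the truck weight T is modeled as Normal(mu, sigma) estimated from historical data, the largest product weight w satisfying P(w + T <= limit) >= alpha is w = limit - mu - z_alpha * sigma. A hedged sketch (function and variable names are assumptions, not the paper's notation):

```python
# Illustrative sketch of the normal-distribution-based loading rule.
# The weight limit, truck data, and service level alpha are hypothetical.
from statistics import NormalDist, mean, stdev

def max_load(limit, truck_weights, alpha=0.95):
    """Largest product weight that meets the total-weight regulation
    with probability alpha, given historical truck weights."""
    mu, sigma = mean(truck_weights), stdev(truck_weights)
    z = NormalDist().inv_cdf(alpha)  # safety factor for the chosen alpha
    return limit - mu - z * sigma
```

Raising alpha trades payload for compliance: at alpha = 0.5 the rule reduces to the simple average heuristic the paper compares against, and the gap between the two is the safety margin z * sigma.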

Three dimensional GPR survey for the exploration of old remains at Buyeo area (부여지역 유적지 발굴을 위한 3차원 GPR 탐사)

  • Kim Jung-Bo;Son Jeong-Sul;Yi Myeong-Jong;Lim Seong-Keun;Cho Seong-Jun;Jeong Ji-Min;Park Sam-Gyu
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.49-69
    • /
    • 2004
  • One of the important roles of geophysical exploration in archeological surveys is to provide subsurface information for the effective and systematic excavation of historical remains. Ground Penetrating Radar (GPR) can image shallow subsurface structure with high resolution and is regarded as a useful and important technology in archeological exploration. Since buried cultural relics are three-dimensional (3-D) objects in nature, a 3-D or areal survey is more desirable in archeological exploration. However, because a 3-D GPR survey in principle relies on very dense data, it can require much higher cost and a longer exploration time than other geophysical methods, and thus has not been applied to wide-area exploration as a routine procedure. It is therefore important to develop an effective way of conducting 3-D GPR surveys. In this study, we applied the 3-D GPR method to investigate possible historical remains of the Baekje Kingdom at Gatap-Ri, Buyeo city, prior to excavation. The principal purpose of the investigation was to provide high-resolution subsurface images for the excavation of the surveyed area. A further purpose was to investigate the applicability and effectiveness of a continuous data-acquisition system newly devised for archeological investigation. The system consists of two sets of GPR antennas and a precise measurement device that tracks the path of the GPR antenna movement automatically and continuously. Besides this hardware, we adopted a data-acquisition scheme in which data were acquired arbitrarily rather than along pre-established profile lines, because establishing many profile lines would itself lengthen the field work and raise its cost. Owing to the newly devised system, we could acquire 3-D GPR data over a wide area of about $17,000 m^2$ in just two days of field work. Although the 3-D GPR data were gathered randomly rather than along pre-established profile lines, we obtained high-resolution 3-D images showing many distinctive anomalies that could be interpreted as old agricultural lands, waterways, and artificial structures or remains. This case history leads us to conclude that the 3-D GPR method can easily be used not only to examine a small anomalous area but also to investigate wider regions of archeological interest. We expect that the 3-D GPR method will be adopted as one of the standard procedures for the exploration of historical remains in Korea in the near future.

  • PDF

Permanent Preservation and Use of Historical Archives: Preservation Strategy and Digitization (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies
    • /
    • no.1
    • /
    • pp.23-76
    • /
    • 2000
  • In this paper, I examine what has been researched and determined about preservation strategy and the selection of preservation media in the western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worth preserving permanently. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has been shortened by the acidification and brittleness of modern papers. The emergence of information technology has also affected the traditional ways of preserving and using archival materials. User expectations are becoming so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than practicing the traditional archival handicraft. Preservation strategy plays an important role in archival management as well as information management. For cost-effective management of archives and archival institutions, a preservation strategy is a must. A preservation strategy encompasses all aspects of the archival preservation process and its practices, from selection of archives, appraisal, inventorying, arrangement, description, conservation, microfilming or digitization, and archival buildings, to access services. These archival functions should be considered in relation to each other to ensure proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing one for the other. Preservation strategy planning is essential to determine the policies by which archives keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming is meant to ensure the permanent preservation of the information held in important archival materials. To this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. 
Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option as a permanent preservation medium. ISO and ANSI have developed standards for the quality of microfilms and microfilming technology. Preservation microfilming guidelines were also developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming: limited resources always put a restraint on preservation management, so appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use over the internet. As information technology develops, archivists have begun to utilize it to make preservation easier and more economical and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disks, have not been proven reliable for permanent preservation. Because of their chemical coatings and their physical reliance on light, they are not stable and can be preserved for at best 100 years in an optimum storage environment; most CD-Rs can last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, the cost of refreshing or upgrading digital images is very high, and the process has to be repeated at least every five to ten years. 
No standard for this obsolescence of hardware and software has yet come into being. In short, digital permanence is not a fact; it remains an uncertain possibility. Archivists must consider in their preservation planning both the risk of introducing new technology and its promise at the same time. In planning the digitization of historical materials, archivists should incorporate plans for maintaining the digitized images and reformatting them for coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become impossible. That is a loss of information, and a final failure of both the 'preservation' and the 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

A Probabilistic Risk-based Cost Estimation Model for Initial-Stage Decision Making on Apartment Remodeling Projects (공동주택 리모델링 초기 단계 의사결정을 위한 확률론적 리스크 기반 비용 예측 모델 개발)

  • Lee, Dong-gun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management
    • /
    • v.17 no.2
    • /
    • pp.70-79
    • /
    • 2016
  • The current remodeling cost estimation process is not only dependent on historical data from new building construction, but also poorly linked with risk-based estimation approaches. As such, there is a high risk of falling short of the initial budget. To overcome this, a risk-based estimation approach is necessary, providing a probabilistic estimate that accounts for the potential risk factors in conducting remodeling projects. In addition, the decision-making process should be linked with the risk-based estimation results instead of intuitive and/or experience-based estimates. This study provides a probabilistic estimation process for residential remodeling projects by developing a detailed, step-by-step methodology. The newly proposed estimation approach can support the decision of whether or not to proceed with a remodeling project by effectively reflecting the potential risk factors in the early stage of the project. In addition, the study enhances the reliability of the estimation results by developing a sustainable estimation process model in which a risk-based evaluation is accomplished by setting up a cost-risk relationship database structure.
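A probabilistic risk-based estimate of the kind the abstract describes can be sketched with Monte Carlo sampling; the triangular impact distributions and numbers below are illustrative assumptions, not the paper's database or model:

```python
# Hypothetical sketch: each risk factor has an occurrence probability
# and a triangular cost impact; sampling yields a cost distribution
# whose percentiles can support an early go/no-go decision.
import random

def simulate_cost(base_cost, risks, n=10_000, seed=42):
    """risks: list of (probability, (low, mode, high)) cost impacts."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        total = base_cost
        for p, (low, mode, high) in risks:
            if rng.random() < p:  # does this risk materialize?
                total += rng.triangular(low, high, mode)
        totals.append(total)
    return totals

def percentile(samples, q):
    """Empirical q-quantile of the simulated cost distribution."""
    s = sorted(samples)
    return s[int(q * (len(s) - 1))]
```

Reporting, say, the 80th percentile rather than a single point estimate is what turns the risk database into a budget with an explicit confidence level.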

An Investigation and Study on Farm Mechanization in Korea (우리나라 농업기계화에 관한 조사연구)

  • 최재갑
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.13 no.3
    • /
    • pp.2349-2371
    • /
    • 1971
  • 1. The historical development of agriculture in Korea is reviewed, and the future of Korean agriculture is projected from the present situation, in order to recommend a policy direction for agricultural mechanization. 2. A factor analysis of agricultural mechanization. The need for agricultural mechanization, from the viewpoints of both the national interest and the farmer's desire under the present situation, is analyzed with data from various sources. The researcher found that agricultural mechanization is badly needed to develop Korean agriculture for the future. 3. The direction of agricultural mechanization. Agriculture plays a very important role in the national economy, and this importance should not be ignored by politicians in developing long-range economic plans. Agricultural mechanization for a modernized Korean agriculture should be directed at obtaining the most effective results with the least sacrifice. The merry tiller is recommended as the main agricultural machine in Korea, to suit its small farming operation units (farm sizes). The tractor is recommended in the plain areas for crop cultivation; since cooperative cultivation of rice and upland crops will develop there, the tractor is recommended as the main agricultural machine in those areas. Either the tractor or the merry tiller is recommended for orchard areas, depending on the operating size of the orchard. The researcher also discusses developing animal husbandry on farms of increasing size, in order to develop meadows and pastures in consideration of both improved food consumption and the comprehensive development of national resources. 4. The relationship between the performance of various agricultural machines and the economic scale. 
Because agricultural machinery entails an expensive fixed cost, the total of fixed and operating expenses per Dan-bo must be less than the traditional expense per Dan-bo, given the increased crop yields per Dan-bo. Since the annual fixed expense of agricultural machinery is figured over its durable life, the larger the farm, the lower the fixed expense of machinery per unit area. The principle can be stated as: fixed expense per Dan-bo = fixed expense of agricultural machinery / farm size (farming scale). The break-even point is the point at which the expense of using agricultural machinery balances the traditional farming expense. Under traditional farming, labor cost per Dan-bo increases as the management scale increases, while mechanized management decreases the management cost per Dan-bo. By figuring out the break-even point, the researcher found that the distribution of agricultural machinery will become advantageous after 1981. 5. Investigation and conclusion. The purpose of this study is to find the direction of agricultural mechanization and the break-even points of various agricultural machines, and thereby the effectiveness of using them: the collection cutter, binder, foot-powered thresher, semi-power thresher, power thresher, combine, power rice transplanter, etc.

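The break-even principle stated in the abstract (fixed expense per Dan-bo = machinery fixed expense / farm size) can be sketched directly; all cost figures here are illustrative assumptions, not the paper's 1971 data:

```python
# Illustrative break-even calculation for farm mechanization.
# Per-unit fixed cost falls with farm size, so mechanization pays off
# beyond the area where mechanized and traditional per-unit costs meet.

def machine_cost_per_unit(fixed_cost, area, variable_per_unit):
    """Total mechanized cost per unit area (e.g. per Dan-bo)."""
    return fixed_cost / area + variable_per_unit

def break_even_area(fixed_cost, variable_per_unit, traditional_per_unit):
    """Area at which mechanized cost equals the traditional farming cost:
    fixed/area + variable == traditional  =>  area = fixed/(trad - var)."""
    return fixed_cost / (traditional_per_unit - variable_per_unit)
```

Below the break-even area the annual fixed expense dominates; above it, mechanization is cheaper per Dan-bo, which is the argument behind the paper's post-1981 forecast.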

A Methodology of Seismic Damage Assessment Using Capacity Spectrum Method (능력 스펙트럼법을 이용한 건물 지진 손실 평가 방법)

  • Byeon, Ji-Seok
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.9 no.3 s.43
    • /
    • pp.1-8
    • /
    • 2005
  • This paper describes a new objective methodology for seismic building damage assessment, called the Advanced Component Method (ACM). ACM is a major attempt to replace the conventional loss estimation procedure, which is based on subjective measures and the opinions of experts, with one that objectively measures both earthquake intensity and the response of buildings. First, the response of typical buildings is obtained analytically by nonlinear static seismic (push-over) analyses. Spectral displacement is used as the measure of earthquake intensity, in order to use the Capacity Spectrum Method, and damage functions for each building component, both structural and non-structural, are developed as functions of component deformation. Examples of components include columns, beams, floors, partitions, glazing, etc. A repair/replacement cost model is developed that maps the physical damage of each component to monetary damage. Finally, building response, component damage functions, and the cost model are combined probabilistically, using Monte Carlo simulation techniques, to develop the final damage functions for each building type. Uncertainties in building response resulting from variability in material properties and load assumptions are incorporated via the Latin Hypercube sampling technique. The paper also presents and compares ACM and conventional building loss estimation based on historical damage data and reported loss data.
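The abstract combines component damage functions probabilistically using Monte Carlo simulation with Latin Hypercube sampling of uncertain material and load parameters. A minimal Latin Hypercube sampler (an illustrative sketch, not the paper's implementation):

```python
# Latin Hypercube sampling: each of n samples draws one value from each
# of n equal-probability strata per variable, covering the input range
# more evenly than plain Monte Carlo for the same sample count.
import random

def latin_hypercube(n, dims, seed=0):
    """Return n points in [0,1)^dims with stratified coordinates."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        # one uniform draw inside each of the n strata, then shuffle
        # so strata are paired randomly across dimensions
        col = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))
```

Each unit-interval coordinate would then be mapped through the inverse CDF of the corresponding material or load distribution before running the structural analyses.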

Intensity Based Stereo Matching Algorithm Including Boundary Information (경계선 영역 정보를 이용한 밝기값 기반 스테레오 정합)

  • Choi, Dong-Jun;Kim, Do-Hyun;Yang, Yeong-Yil
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.12
    • /
    • pp.84-92
    • /
    • 1998
  • In this paper, we propose novel cost functions for finding the disparity between the left and right images in the stereo matching problem. A dynamic programming method was used to solve the stereo matching problem by Cox et al. [10]. In [10], only the intensity of the pixels on the epipolar line is used as the cost function for finding corresponding pixels. We propose two new cost functions. First, the slope of a pixel is introduced as a constraint in weighting intensity against direction (the historical information): pixels with higher slope are matched mainly by pixel intensity, and as the slope becomes lower, the matching relies mainly on direction. Second, the disparity information of the previous epipolar line is used to find the disparity of the current epipolar line. If a pixel $p_i$ in the left epipolar line and a pixel $p_j$ in the right epipolar line satisfy the following conditions, a higher matching probability is given to $p_i$ and $p_j$: i) $p_i$ and $p_j$ lie on edges in the left and right images, respectively; ii) for pixels $p_k$ and $p_l$ in the previous epipolar line, $p_k$ and $p_l$ are matched and lie on the same edges as $p_i$ and $p_j$, respectively. Compared with the original method [10], the proposed method finds better matching results for the test images.

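The paper builds on dynamic-programming matching along epipolar lines (Cox et al. [10]). A compact sketch of that baseline idea, with intensity difference as the match cost and a fixed occlusion penalty (the penalty value and cost structure are illustrative assumptions; the paper's added slope and edge-continuity terms are not modeled here):

```python
# Edit-distance-style DP over one epipolar line: each left pixel is
# either matched to a right pixel (cost = intensity difference) or
# left occluded (fixed penalty), and symmetrically for right pixels.

def match_scanline(left, right, occlusion=10):
    """Minimal total cost of aligning two intensity scanlines."""
    n, m = len(left), len(right)
    INF = float("inf")
    # C[i][j]: minimal cost of aligning left[:i] with right[:j]
    C = [[INF] * (m + 1) for _ in range(n + 1)]
    C[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:  # match left[i-1] with right[j-1]
                C[i][j] = min(C[i][j],
                              C[i-1][j-1] + abs(left[i-1] - right[j-1]))
            if i:  # left pixel occluded
                C[i][j] = min(C[i][j], C[i-1][j] + occlusion)
            if j:  # right pixel occluded
                C[i][j] = min(C[i][j], C[i][j-1] + occlusion)
    return C[n][m]
```

Backtracking through C recovers the pixel correspondences, i.e. the disparity map for the line; the paper's contribution is to reshape these costs with slope and previous-line edge information.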