• Title/Summary/Keyword: Reduction of system order


A Study on the existence aspect of the elderly in the Joseon Dynasty (조선시대 노인(老人)의 존재양상 - 연령과 신분을 중심으로 -)

  • Kim, Hyo-Gyong
    • Journal of Korean Historical Folklife
    • /
    • no.52
    • /
    • pp.7-46
    • /
    • 2017
  • The elderly in the Joseon Dynasty consistently drew the state's attention as objects of social respect. Under Confucian ideology, an old person was regarded as one of complete character and therefore an object of veneration. Respect for the elderly transcended social status: both commoners (Yangmin) and the lowborn (Cheonmin) became objects of respect beyond their station. Perceptions of the elderly were divided by age. Those who reached 50 and began to decline physically were seen as approaching old age, but 50 was mentioned only as an inflection point between the prime of life and old age, and was not itself defined as elderly. Since public duty in the form of national labor service ended at 60, that age was in practice the lower limit of old age applicable to all members of society. However, because their public duties simply ended at 60 and they were still regarded as ordinary members of society, no special benefits were granted at that age. In the caste-based bureaucratic society, the treatment of the elderly differed by age. At 70, various benefits were granted only to high officials: Bokho (復戶) and Sijeong were given to them first, and no fixed retirement age was set for officials; instead, honored elders were treated with exceptional respect through Chisa (致仕, regular retirement), the most respectful treatment accorded to high officials and ministers. At 80, Yangnoyeon (養老宴, banquets for the elderly) were held for both Yangmin and Cheonmin as a considerate measure, and official ranks (官品) carrying social value were conferred through Noinjik (老人職, honorary offices for the elderly). Conferring official rank on commoners and the lowborn was the highest expression of the Jonno (尊老, respect for elders) policy.
However, the Jonno policy differed according to position and official rank: kings received this social treatment at 60; high officials and royal relatives of senior second rank at 70; and ordinary commoners and slaves at 80 and 90, respectively. The elderly were supported individually, but the support carried social value because the state supervised it. As Bokho and Sijeong were assigned according to position and official rank, and the kinds of gifts differed, the social hierarchy was clearly visible: social order stood above the ideology of Jonno. Nevertheless, acts of respect by age and position did not remain at the individual level; the state cared for the elderly as members of society in various ways grounded in Jonno thought, looking after those whose physical decline made activity difficult. The state raised the standing of the elderly economically by bestowing gifts (賜物) such as chairs, rice, meat, and ice; legally through exoneration (免罪), sentence reduction, and wergild; and socially through Noinjik and the conferral of rank (Gaja, 加資), reaffirming them as members of society. That Yangnoyeon and Gaja were extended to people of every class, transcending position and official rank, makes clear that those over 80 were objects of social Jonno: upon reaching 80, the elderly were respected as persons socially honored beyond their status.

Analysis of dose reduction of surrounding patients in Portable X-ray (Portable X-ray 검사 시 주변 환자 피폭선량 감소 방안 연구)

  • Choe, Deayeon;Ko, Seongjin;Kang, Sesik;Kim, Changsoo;Kim, Junghoon;Kim, Donghyun;Choe, Seokyoon
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.2
    • /
    • pp.113-120
    • /
    • 2013
  • Nowadays, the medical system for patients is changing into a service model. As human rights awareness improves and capitalism expands, the rights and demands of patients are steadily increasing, and hospital procedures are being revised for patients' convenience. Accordingly, mobile (portable) examinations are becoming more common. Because the number of portable examinations performed in patient rooms, intensive care units, operating rooms, and recovery rooms is increasing, neighboring patients are unnecessarily exposed to radiation, so the practice is legally regulated. Under the standards for radiation protection facilities in diagnostic radiology, hospitals must ensure that "when an examination is performed outside the operating room, emergency room, or intensive care unit, portable medical X-ray protective shields shall be set up." Some hospitals follow this regulation well, but most do not. In this study, we shielded the area around the collimator, where scattered radiation arises, and measured the change in dose before and after shielding for various angles of the portable tube and collimator. We also examined how the distance between patient beds affects dose. The areas around the collimator were affected by the shielding: after shielding, about 20% more radiation was blocked than with no shielding. During portable examinations, exposure dose increased in the order $0^{\circ}$, $90^{\circ}$, and $45^{\circ}$. At each angle setting, the dose around the collimator declined after shielding. In addition, exposure dose was lower at a bed distance of 1 m than at 0.5 m. Considering the shielding effects, placing the beds as far apart as possible is the most effective way to block radiation, approaching 100%.
The next most effective measure is shielding the collimator (about 20%), followed by controlling the angle (roughly 10%). When performing a portable examination, it is best to keep other patients and guardians far enough away to reduce their exposure dose. When a bed is fixed and the patient cannot be moved, shielding around the collimator is suggested. Furthermore, a collimator-tube angle of $90^{\circ}$ is recommended; if that is not possible, the examination should be taken at $0^{\circ}$, and $45^{\circ}$ should be avoided. Radiation workers should be aware of these results and apply them in practice, and further research into effective ways of reducing exposure dose and shielding radiation is recommended.
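The bed-distance finding above follows the familiar inverse-square behavior of radiation from a roughly point-like source. A minimal sketch, assuming pure inverse-square falloff (a simplification: scatter from the patient is an extended source, so the real falloff is somewhat less steep; the function name is mine, not the paper's):

```python
def relative_dose(d_ref: float, d: float) -> float:
    """Dose at distance d relative to the dose at reference distance d_ref,
    under the point-source inverse-square assumption."""
    return (d_ref / d) ** 2

# Moving a neighboring bed from 0.5 m to 1 m cuts the estimated relative
# dose to one quarter, consistent with the paper's 1 m vs 0.5 m result.
print(relative_dose(0.5, 1.0))  # 0.25
```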

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Owing to the recent expansion of Web 2.0-based services and the spread of smartphones, online social network services (SNS) have become popular. SNS are online community services that enable users to communicate with each other, share information, and expand their relationships. In an SNS, each relation between users is represented in a graph of nodes and links. As the number of SNS users grows rapidly, SNS data are actively used in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic analysis of relationships among the members of a social network using network theory. A social network is typically depicted as a diagram in which nodes represent individual actors and arcs represent relationships between them. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and group membership. Since SNS began to attract millions of users, numerous studies have analyzed their user relationships and messages. Representative SNA measures include degree centrality, betweenness centrality, and closeness centrality. Degree centrality does not consider the shortest path between nodes, but shortest paths are a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In earlier SNA research, computation time was not a concern because the networks studied were small. Unfortunately, most SNA methods require significant time to process data, which makes it difficult to apply them to ever-growing SNS data.
For instance, if an online social network has n nodes, the maximum number of links is n(n-1)/2; with 10,000 nodes the number of links can reach 49,995,000, which makes exhaustive analysis prohibitively expensive. We therefore propose a heuristic method for finding shortest paths among users in the SNS user graph, and demonstrate its efficiency through betweenness centrality and closeness centrality analyses, both widely used in social network studies. We devised an enhanced method that adds best-first search and a preprocessing step to reduce computation time and speed up shortest-path search in huge online social networks. Best-first search finds the shortest path heuristically, generalizing human experience. Because a large share of the links in an online social network is concentrated on only a few nodes, most nodes have relatively few connections, and a node with many connections functions as a hub. When searching for a particular node, examining users with many links first, rather than searching all users indiscriminately, improves the chance of finding the target quickly. We employ the degree of a user node $v_n$ as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between distinct nodes. With this heuristic, the worst case occurs when the target node sits at the bottom of a skewed tree; a preprocessing step is performed to handle such target nodes. We then find the shortest path between two nodes efficiently and analyze the social network. To verify the proposed method, we crawled 160,000 people online and constructed a social network.
We then compared searching and analysis times against previous methods based on best-first search and breadth-first search. The proposed method takes 240 seconds to search nodes where the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). For social network analysis, the proposed method is 6.8 times faster in betweenness centrality analysis and 1.8 times faster in closeness centrality analysis. The proposed method thus shows that a large social network can be analyzed with better time performance; it would improve the efficiency of social network analysis, making it particularly useful for studying social trends and phenomena.
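The degree-heuristic best-first search described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions (an undirected, unweighted friendship graph; all function and variable names are mine), not the authors' implementation, and it omits the paper's preprocessing step for skewed trees:

```python
import heapq
from collections import defaultdict

def best_first_path(edges, start, goal):
    """Greedy best-first search over an undirected friendship graph.
    Heuristic: expand high-degree (hub) users first, since most paths
    in an SNS pass through hubs. Returns one start-goal path; without
    the paper's preprocessing it is not guaranteed to be the shortest."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Priority is negative degree, so the busiest node is popped first.
    frontier = [(-len(adj[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in adj[node]:
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-len(adj[nxt]), nxt, path + [nxt]))
    return None  # goal unreachable

# Tiny example: "c" is the hub, so the search reaches "d" through it.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(best_first_path(edges, "a", "d"))  # ['a', 'c', 'd']
```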

EFFECT OF OCTANOL, THE GAP JUNCTION BLOCKER, ON THE REGULATION OF FLUID SECRETION AND INTRACELLULAR CALCIUM CONCENTRATION IN SALIVARY ACINAR CELLS (흰쥐 악하선 세포에서 gap junction 봉쇄제인 octanol이 타액분비 및 세포내 $Ca^{2+}$ 농도 조절에 미치는 영향)

  • Lee, Ju-Seok;Seo, Jeong-Taeg;Lee, Syng-Il;Lee, Jong-Gap;Sohn, Heung-Kyu
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.26 no.2
    • /
    • pp.399-415
    • /
    • 1999
  • From bacteria to mammalian cells, free calcium is one of the most important mediators of the intracellular signal transduction mechanisms that regulate a variety of cellular processes. In salivary acinar cells, elevation of the intracellular calcium concentration ($[Ca^{2+}]_i$) is essential for the salivary secretion induced by parasympathetic stimulation. In addition to $[Ca^{2+}]_i$, however, gap junctions, which couple individual cells electrically and chemically, have also been reported to regulate enzyme secretion in pancreatic acinar cells. Since the plasma membrane of salivary acinar cells has a high density of gap junctions, and these cells are electrically and chemically coupled with each other, gap junctions may modulate the secretory function of the salivary glands. We therefore investigated the role of gap junctions in the modulation of salivary secretion and $[Ca^{2+}]_i$ using rat mandibular salivary glands. To measure the salivary flow rate, fluid was collected at 2-min intervals from the cannulated duct of isolated perfused rat mandibular glands. $[Ca^{2+}]_i$ was measured by spectrofluorometry in cells loaded with fura-2. The results were as follows: 1. Carbachol (CCh)-induced salivary secretion was reversibly inhibited by 1 mM octanol, a gap junction blocker. 2. The CCh-induced increase in $[Ca^{2+}]_i$ was also reversed by the application of 1 mM octanol. 3. Octanol did not block the initial increase in $[Ca^{2+}]_i$ caused by CCh, suggesting that the reduction of $[Ca^{2+}]_i$ caused by gap junction blockade did not result from inhibition of $Ca^{2+}$ release from intracellular $Ca^{2+}$ stores. 4. Addition of octanol during stimulation with $1{\mu}M$ thapsigargin, a potent microsomal ATPase inhibitor, reduced $[Ca^{2+}]_i$ to the basal level, suggesting that inhibition of gap junction permeability closed plasma membrane $Ca^{2+}$ channels. 5. 2,5-di-tert-butyl-1,4-benzohydroquinone (TBQ) generated $[Ca^{2+}]_i$ oscillations resulting from periodic influx of $Ca^{2+}$ across the plasma membrane; these oscillations were stopped by the application of 1 mM octanol, implying that gap junctions modulate the permeability of plasma membrane $Ca^{2+}$ channels. 6. Glycyrrhetinic acid, another well-known gap junction blocker, also inhibited CCh-induced salivary secretion from rat mandibular glands. These results suggest that gap junctions play an important role in the modulation of fluid secretion from the rat mandibular glands, probably through inhibition of $Ca^{2+}$ influx through plasma membrane $Ca^{2+}$ channels.


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting corporate credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. Their major drawback is reliance on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion and predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to analyze mathematically yet delivers high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely. Moreover, SVM does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. Numerous experimental studies have shown SVM applied successfully in a variety of pattern recognition fields. However, three major drawbacks can degrade SVM's performance.
First, SVM was originally proposed for binary classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not match the performance that SVM achieves on binary problems. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce computation time in multi-class settings but may deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms. AdaBoost, one of the most widely used ensemble techniques, constructs a composite classifier by training classifiers sequentially while increasing the weight of misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than those predicted correctly, so boosting produces new classifiers that better predict the examples on which the current ensemble performs poorly. In this way it can reinforce the training of misclassified minority-class observations. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process accounts for geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for each classifier over 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). In terms of geometric mean-based prediction accuracy, MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%). A t-test was used to examine whether the performance of the classifiers over the 30 folds differed significantly; the results indicate that MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
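To make the geometric-mean criterion concrete: the geometric mean of per-class recalls penalizes a classifier that ignores a minority class far more harshly than arithmetic accuracy does, which is the motivation the abstract gives for MGM-Boost. A minimal sketch of the metric only, assuming "geometric mean-based accuracy" means the geometric mean of per-class recalls (the function name is mine; this is not the MGM-Boost weight-update rule itself):

```python
from math import prod

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls. Unlike arithmetic accuracy,
    it collapses to 0 if any class is entirely misclassified, so a
    classifier cannot score well by ignoring minority rating classes."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return prod(recalls) ** (1 / len(classes))

# A classifier that ignores the minority class "B":
# arithmetic accuracy is 0.75, but the geometric mean collapses to 0.
print(geometric_mean_accuracy(["A", "A", "A", "B"], ["A", "A", "A", "A"]))  # 0.0
```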

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.231-252
    • /
    • 2021
  • Artificial intelligence (AI) is a key technology that will shape the future, affecting industry as a whole and daily life in various ways. As data availability increases, AI finds optimal solutions and infers and predicts through self-learning, and research and investment in automation that discovers and solves problems on its own continue. AI-driven automation offers benefits such as cost reduction, minimal human intervention, and freedom from the limits of human capability, but there are also side effects, such as limits to the autonomy of AI and erroneous results due to algorithmic bias, and in the labor market it raises fears of job replacement. Prior studies on the utilization of AI have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm error than to human error, so they avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, AI has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted to human-AI collaboration rather than AI alone. A study of 1,500 companies across industries found that human-AI collaboration outperformed AI alone, and in medicine, pathologist-deep learning collaboration reduced the pathologists' cancer diagnosis error rate by 85%. Leading AI companies such as IBM and Microsoft are adopting the direction of AI as augmented intelligence. Human-AI collaboration is emphasized in the decision-making process because AI is superior in information-based analysis while intuition is a uniquely human capability, so collaboration can yield optimal decisions.
In an environment of ever-faster change and increasing uncertainty, the need for AI in decision-making will grow, and active discussion of approaches that use AI for rational decision-making is expected. This study investigates the impact of AI on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. Advisors were classified into three types: human, AI, and human-AI collaboration. We examined the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos. The results showed that the advisor type does not directly affect the utilization of advice; decision-makers use advice only when they believe it can improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personality traits, and the advice was utilized more actively. When the advisor was AI alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and utilized it actively. This study is academically significant in focusing on human-AI collaboration amid the recent growing interest in the roles of AI; it expands the research area by considering AI as an advisor in decision-making and judgment research, and, in practical terms, it suggests what companies should consider in order to enhance their AI capability.
To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also need employees who properly understand the digital information AI presents and can add non-digital information when making decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies such as analytical skills and information technology capability are important. In addition, greater performance can be expected if employees' personality traits are taken into account.

Evaluation of HalcyonTM Fast kV CBCT effectiveness in radiation therapy for cervical cancer patients of childbearing age who have undergone ovarian transposition (난소전위술을 시행한 가임기 여성의 자궁경부암 방사선치료 시 난소선량 감소를 위한 HalcyonTM Fast kV CBCT의 유용성 평가 : Phantom study)

  • Lee Sung Jae;Shin Chung Hun;Choi So Young;Lee Dong Hyeong;Yoo Soon Mi;Song Heung Gwon;Yoon In Ha
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.34
    • /
    • pp.73-82
    • /
    • 2022
  • Purpose: To evaluate the effectiveness of HalcyonTM Fast kV CBCT in reducing the absorbed dose to the ovaries, and the resulting CBCT image quality, for cervical cancer patients of childbearing age who have undergone ovarian transposition. Materials and Methods: The cervix and ovaries were contoured on computed tomography images of a human phantom (Alderson Rando Phantom, USA), and three optically stimulated luminescence dosimeters (OSLDs) were attached to each selected organ cross-section. To measure the absorbed dose to the cervix and ovaries, images were acquired in TruebeamTM Pelvis mode (hereinafter TP), HalcyonTM Pelvis mode (hereinafter HP), and HalcyonTM Pelvis Fast mode (hereinafter HPF) with a scan range of 17.5 cm, and again with the scan range reduced to 12.5 cm. Ten cumulative doses were summed and scaled to 23 fractions, the number of cervical cancer treatments, for comparison. In addition, uniformity, low-contrast visibility, spatial resolution, and geometric distortion were compared and analyzed using a Catphan 504 phantom to assess CBCT image quality across the machines. Each factor was measured three times, and the average value was obtained by analysis with the Doselab program (Mobius Medical Systems, LP, version 6.8). Results: In the OSLD measurements of absorbed dose from CBCT, TP and HP showed no significant difference under the same conditions. The greatest reduction was observed for HPF versus TP: at a scan range of 17.5 cm, HPF reduced the absorbed dose by 39.8% in the cervix and 19.8% in the ovaries compared to TP; with the scan range reduced to 12.5 cm, the absorbed dose was reduced by 34.2% in the cervix and 50.5% in the ovaries.
In addition, evaluation of the image quality in the above experiment showed compliance with the equipment manufacturer's standards: geometric distortion within 1 mm (SBRT standard), uniformity (HU) and low-contrast visibility within 2.0%, and spatial resolution above 3 lp/mm. Conclusion: According to these results, HalcyonTM offers more imaging options than TruebeamTM for treating patients of childbearing age who have undergone ovarian transposition, for whom reducing the radiation dose from CBCT during radiation therapy is important. We therefore recommend HalcyonTM Fast kV CBCT, which maintains image quality even at low mAs. On other treatment machines, the additional low-dose exposure can also be reduced by limiting the imaging range for patients who have undergone ovarian transposition.

A Study on Forest Insurance (산림보험(山林保險)에 관한 연구(硏究))

  • Park, Tai Sik
    • Journal of Korean Society of Forest Science
    • /
    • v.15 no.1
    • /
    • pp.1-38
    • /
    • 1972
  • 1. Objective of the Study: The objective of the study was to make fundamental suggestions for designing a forest insurance system applicable in Korea by investigating forest insurance systems undertaken in foreign countries, analyzing forest hazards that occurred throughout Korea's forests in the past, and hearing the opinions of people engaged in forestry. 2. Methods of the Study: First, reference studies on insurance in general as well as on forest insurance were made intensively to identify the characteristics of forest insurance as practiced in the main forestry countries. Second, forest hazards in Korea over the past ten years were investigated with the help of the Office of Forestry. Third, questionnaires concerning forest insurance were prepared and delivered at random to 533 people working in forestry administrative offices, forest stations, forest cooperatives, colleges and universities, research institutes, and fire insurance companies. Fourth, the writer directly interviewed fifty-three representative forest owners across three forest types (coniferous, hardwood, and mixed forest) in a representative region of Kyonggi Province, one of fourteen collective forest development programs in Korea. 3. Results of the Study: The rate of response to the questionnaire was 74.40%, as shown in Table 3, and the results were as follows (percentages in parentheses show the rates of response; shortfalls from 100% are due to excluding minor responses). 1) Necessity of forest insurance: Respondents stated that forest insurance must be undertaken to assure forest financing (5.65%), to receive reimbursement of replanting costs in case of damage (35.87%), and to protect silvicultural investments (46.74%).
2) Law of forest insurance: Few respondents favored applying general insurance regulations to forest insurance practice (9.35%); the majority favored passing a special forest insurance law reflecting the characteristics of forests (88.26%). 3) Institutions to underwrite forest insurance: A few respondents believed that general insurance companies could handle forest insurance (17.42%), or that forest owners' mutual associations would manage it more effectively (23.53%), but more than half favored establishing public or national forest insurance institutions (56.18%). 4) Risks to be covered: Respondents held that coverage should be limited to forest fire only (23.38%); to forest fire plus weather damage (14.32%); or to forest fire, weather damage, and insect damage (60.68%). 5) Objects to be insured: Respondents held that coverage should be limited to artificial coniferous forests only (13.47%) or to both coniferous and broad-leaved artificial forests (23.74%), but more than half wished all forests to be insured regardless of species and method of establishment (61.64%). 6) Age range of trees to be insured: Some felt it enough to insure trees less than ten years of age (15.23%); others preferred covering trees under twenty years of age (32.95%); nevertheless, a large number favored underwriting all forest trees less than forty years of age (46.37%).
7) Term of a forest insurance contract Quite a few respondents favored a contract made on a one-year basis (31.74%), but more than half favored a contract made on a five-year basis (58.68%). 8) Limitation in a forest insurance contract Some respondents indicated that it would be desirable to exclude forests of less than five hectares from a forest insurance contract (20.78%), but more than half expressed the opinion that forests above a minimum volume or number of trees per unit area should be included regardless of the area of the forest land (63.77%). 9) Methods of contract Some responded that it would be good to let forest owners choose which of their forests to cover in making a forest insurance contract (32.13%); others inclined to think it desirable to include all the forests an owner holds whenever he decides to make a forest insurance contract (33.48%); the rest favored requiring owners to buy an insurance policy if they own forests established with subsidy or hold highly valuable growing stock (31.92%). 10) Rate of premium The responses were divided into three categories: (1) the rate of premium is to be decided according to the regional degree of risk (27.72%); (2) it is to be decided by taking into consideration both the regional degree of risk and the insurable value (31.59%); (3) it is to be decided according to the rate of risk for the entire country and the insurable value (39.55%). 11) Payment of premium Although a few respondents wished to pay the premium in a lump sum for a short-term forest insurance contract and annually for a long-term contract (13.80%), the majority wished to pay the premium annually regardless of the term of contract, with a high premium rate on a short-term contract and a low rate on a long-term contract (83.71%).
12) Institutes in charge of forest insurance business A few respondents desired that forest insurance be handled by the government forest administrative offices (18.75%); others by insurance companies (35.76%); but the largest number favored the forest associations in the counties, and wanted a certain portion of the premium to be paid to the forest associations that issue the insurance (44.22%). 13) Limitation on indemnity for damages done On the limitation of indemnity, the respondents showed quite different views. Some desired compensation to cover replanting costs when young stands suffer damage, and payment at the rate of eighty percent of the loss when matured timber stands suffer damage (29.70%); others desired compensation for the actual total loss valued at present market prices (31.07%); the rest favored compensation at the present value obtained by applying a certain rate of prolongation factors to the establishment costs (36.99%). 14) Raising of funds for forest insurance A few respondents hoped to raise the forest insurance fund by setting aside a certain amount of money from the indemnities paid (15.65%); others wished to raise it by levying new forest land taxes (33.79%); the rest hoped to raise it by reserving a certain amount of the surplus money saved in years without losses (44.81%). 15) Causes of fires The main causes of forest fires, in the respondents' experience, turned out to be (1) accidental fires, (2) cigarettes, and (3) shifting cultivation. These responses coincided with the forest fire analysis made by the Office of Forestry.
16) Fire prevention The respondents suggested that the three most important and practical forest fire prevention measures would be (1) providing fire-breaks, (2) keeping passers-by out during drought seasons, and (3) public enlightenment through mass communication. 4. Suggestions The writer wishes to present some suggestions that seem helpful in drawing up a forest insurance system, based on the findings of the questionnaire analysis and the results of the investigation of forest insurance undertaken in foreign countries. 1) A forest insurance system designed to compensate losses figured on the basis of replanting costs when young forest stands suffer damage, and to strengthen credit ratings by relieving the risk of damage, must be put into practice as soon as possible through the enactment of a specifically drawn forest insurance law; and a committee on forest insurance should be organized to make a full study of the forest insurance system. 2) Two kinds of organizations furnishing forest insurance, publicly owned and privately owned, are desirable in order to handle forest risks properly. The privately owned forest insurance organizations should take up forest fire insurance only, and the publicly owned ought to write insurance for forest fires and insect damage. 3) The privately owned organizations are desired to take up all forest stands older than twenty years, whereas the publicly owned should sell forest insurance on artificially planted stands younger than twenty years, with emphasis on compensating the replanting costs of forest stands when they suffer damage.
4) Small forest stands of less than one hectare, or stands holding less than the standard volume or number of trees per unit area, are not to be included in forest insurance underwriting. The minimum term of insurance in the privately owned forest insurance organizations should be one year, although the insuring period could be extended beyond one year, whereas a consecutive five-year term should be set as the minimum insuring period in the publicly owned forest insurance organizations. 5) Forest owners should be free to select which of their forests to insure, whereas owners of stands established with subsidy should be required to insure their forests with the publicly owned forest insurance organizations. 6) Annual insurance premiums for both publicly owned and privately owned forest insurance organizations ought to be figured in proportion to the amount of insurance, in accordance with the degree of risk, graded into three categories on the basis of the rates of risk throughout the country. 7) The annual premium should be paid at the beginning of the forest insurance contract, but a reduction must be made if the insuring period extends beyond the minimum period of forest insurance set by the law. 8) The compensation for damages, the reimbursement, should be figured on the basis of the ratio between the amount of insurance and the insurable value. In the publicly owned forest insurance system, the standard amount of insurance should be set on the basis of establishment costs in order to prevent over-compensation. 9) Forest insurance business should be handled at the windows of insurance companies when forest owners buy privately owned forest insurance, but the business of writing publicly owned forest insurance should be done through the forest cooperatives, with a certain portion of the premium reimbursed to the forest cooperatives.
10) Forest insurance funds ought to be reserved by levying a property tax on forest lands. 11) In order to prevent forest damages, the forest owners should be required to report forest hazards immediately to the forest insurance organizations and the latter should bear the responsibility of taking preventive measures.
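Suggestions 6) and 8) above amount to a proportional-premium and proportional-indemnity rule. A minimal sketch of those rules, assuming hypothetical risk-class rates (none of the numbers below come from the study):

```python
# Illustrative only: the three risk classes and their rates are assumptions,
# standing in for the country-wide risk grading the study proposes.
RISK_CLASS_RATES = {1: 0.005, 2: 0.010, 3: 0.020}  # annual rate by risk class

def annual_premium(insured_amount, risk_class):
    """Premium proportional to the amount of insurance, graded by risk class."""
    return insured_amount * RISK_CLASS_RATES[risk_class]

def indemnity(loss, insured_amount, insurable_value):
    """Compensation based on the ratio of the amount of insurance to the
    insurable value, capped at the insured amount to prevent over-compensation."""
    return min(loss * insured_amount / insurable_value, insured_amount)
```

Under this rule, a stand insured for 80% of its insurable value recovers 80% of any loss, which is the standard way the insured-amount-to-insurable-value ratio is applied.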

  • PDF

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed, and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration is whether multiple friction factor curves or a single friction factor curve per trip type are needed to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
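The gravity model and TLF comparison described above can be sketched as follows. This is an illustrative singly constrained formulation (the zone data and friction matrix are hypothetical inputs), not the study's code:

```python
import numpy as np

def gravity_trips(P, A, F):
    """Singly constrained gravity model:
    T[i, j] = P[i] * A[j] * F[i, j] / sum_k(A[k] * F[i, k])."""
    w = A[None, :] * F                              # attraction-weighted friction
    return P[:, None] * w / w.sum(axis=1, keepdims=True)

def trip_length_frequency(T, dist, bins):
    """TLF: share of trips falling in each trip-length bin."""
    hist, _ = np.histogram(dist.ravel(), bins=bins, weights=T.ravel())
    return hist / T.sum()
```

During micro-scale calibration, F would be regenerated from an adjusted friction factor curve and the model's TLF recomputed until it matches the observed OD TLF.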
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by the 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total truck volumes crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
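One SELINK adjustment step, as described above, can be sketched as follows; the zone labels and the data structure are illustrative assumptions, not the study's implementation:

```python
def selink_adjust(productions, attractions, link_trips, ground_count):
    """Apply one selected-link adjustment in place.

    link_trips maps (origin, destination) -> trips assigned to the selected
    link; the adjustment factor is the actual volume for the link (the
    ground count) over the total assigned volume."""
    assigned = sum(link_trips.values())
    factor = ground_count / assigned
    # Scale every origin and destination zone whose trips use the link.
    for origin, dest in link_trips:
        productions[origin] *= factor
        attractions[dest] *= factor
    return factor
```

After factors are applied for all selected links, the GM would be re-run with the revised productions and attractions; the study repeats this cycle up to three times.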
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run, using 32 and 16 selected links, is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH), with 37 check points; the US highways (USH), with 50 check points; and the State highways (STH), with 67 check points, is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area, with 26 check points; the West area, with 36 check points; the East area, with 29 check points; and the South area, with 64 check points, is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
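The %RMSE statistic used throughout these comparisons is, under a common definition (assumed here, since the abstract does not give the formula), the root-mean-square error of assigned volumes against ground counts expressed as a percentage of the average ground count:

```python
import math

def pct_rmse(assigned, counts):
    """RMSE of assigned link volumes vs. ground counts, as a percentage of
    the average ground count (definition assumed, not from the paper)."""
    n = len(counts)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)
```

Normalizing by the average count is consistent with the reported pattern that %RMSE falls as average link volume rises in the screenline and area analyses.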
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions / total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling of the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners in understanding the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
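The 18.3% shortfall quoted above follows directly from the two VMT totals reported in the abstract:

```python
gm_vmt = 2.975      # billion VMT, GM truck forecasting model, 1990
wisdot_vmt = 3.642  # billion VMT, WisDOT computation

shortfall_pct = (wisdot_vmt - gm_vmt) / wisdot_vmt * 100
print(f"GM estimate is {shortfall_pct:.1f}% below the WisDOT figure")
```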
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF