
Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.47-67
    • /
    • 2017
  • Steel plate faults are among the most important factors affecting the quality and price of steel plates. To date, many steelmakers have generally relied on visual inspection, in which an inspector checks the surface of the steel plates based on intuition or experience. However, the accuracy of this method is critically low: judgment errors can exceed 30%. An accurate steel plate fault diagnosis system has therefore long been required by the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify the various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multiclass classification because of its low accuracy there: only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification because it establishes an individual Mahalanobis space for each class; 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate fault diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated from the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and the SN ratio gain: a variable with a negative overall gain should be removed, while a variable with a positive gain may be worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is run to verify the multi-class classification ability and obtain the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. The proposed S-MTS-based diagnosis system achieves a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry. In addition, thanks to the variable optimization process, the proposed system can reduce the number of measurement sensors installed in the field. These results show that the proposed system not only diagnoses steel plate faults well but can also reduce operation and maintenance costs. In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
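
The classification core of S-MTS, as described above, reduces to building one Mahalanobis space per reference class and assigning a test sample to the class whose space yields the smallest Mahalanobis distance. The sketch below illustrates only that core in Python/NumPy under stated assumptions (class-wise standardization, pseudo-inverse covariance, distance scaled by the number of variables, nearest-space decision rule); it is not the authors' implementation and omits the orthogonal-array/SN-ratio variable selection stage.

```python
import numpy as np

class SimpleSMTS:
    """Minimal sketch of the S-MTS idea: one Mahalanobis space per class,
    classification by the smallest Mahalanobis distance (the decision rule
    is an assumption based on the abstract, not the authors' code)."""

    def fit(self, X, y):
        self.spaces = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            std = Xc.std(axis=0, ddof=1)
            Z = (Xc - mean) / std                     # standardize within the class
            self.spaces[c] = (mean, std, np.linalg.pinv(np.cov(Z, rowvar=False)))
        return self

    def distance(self, X, c):
        mean, std, inv_cov = self.spaces[c]
        Z = (X - mean) / std
        # squared Mahalanobis distance per sample, scaled by dimension as in MTS
        return np.einsum('ij,jk,ik->i', Z, inv_cov, Z) / X.shape[1]

    def predict(self, X):
        classes = list(self.spaces)
        # 'simultaneous': compare the distances to every class space at once
        d = np.column_stack([self.distance(X, c) for c in classes])
        return np.array(classes)[d.argmin(axis=1)]
```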

A Data-based Sales Forecasting Support System for New Businesses (데이터기반의 신규 사업 매출추정방법 연구: 지능형 사업평가 시스템을 중심으로)

  • Jun, Seung-Pyo;Sung, Tae-Eung;Choi, San
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.1-22
    • /
    • 2017
  • Analysis of future business or investment opportunities, such as business feasibility analysis and company or technology valuation, necessitates objective estimation of the relevant market and expected sales. While there are various ways to classify the estimation methods for new sales or market size, they can be broadly divided into top-down and bottom-up approaches by their benchmark references. Both methods, however, require substantial resources and time. Therefore, we propose a data-based intelligent demand forecasting system to support the evaluation of new businesses. This study focuses on analogical forecasting, one of the traditional quantitative forecasting methods, to develop a sales forecasting intelligence system for new businesses. Instead of simply estimating sales for a few years, we propose a method of estimating the sales of new businesses by using the initial sales and the sales growth rates of similar companies. To demonstrate the appropriateness of this method, we examine whether the sales performance of recently established companies in the same industry category in Korea can be utilized as a reference variable for analogical forecasting. We examined whether the phenomenon of "mean reversion" is observed in the sales of start-up companies, in order to identify errors in estimating the sales of new businesses from industry sales growth rates, and whether differences in the business environment resulting from different launch timing affect the growth rate. We also conducted analysis of variance (ANOVA) and latent growth modeling (LGM) to identify differences in sales growth rates by industry category. Based on the results, we propose industry-specific range and linear forecasting models. This study analyzed the sales of 150,000 start-up companies in Korea over the last 10 years and identified that the average growth rate of start-ups in Korea is higher than the industry average in the first few years but soon shows mean reversion. In addition, although the founding juncture of a start-up affects the sales growth rate, the effect is not highly significant, and the sales growth rate differs by industry classification. Utilizing both this phenomenon and the performance of start-up companies in relevant industries, we propose two models of new business sales based on the sales growth rate. The method proposed in this study makes it possible to estimate the sales of a new business by industry objectively and quickly, and it is expected to provide reference information for judging whether sales estimated by other methods (top-down/bottom-up approaches) fall outside the bounds of ordinary cases in the relevant industry. In particular, the results of this study can be used as practical reference information for business feasibility analysis or technology valuation when entering a new business. With the existing top-down method, it can be used to set the range of market size or market share; with the bottom-up method, the estimation period may be set in accordance with the mean-reverting period of the growth rate. The two models proposed in this study enable rapid and objective sales estimation for new businesses and are expected to improve the efficiency of business feasibility analysis and the technology valuation process through the development of intelligent information systems. From an academic perspective, it is an important discovery that the phenomenon of 'mean reversion' is found among start-up companies, beyond general small and medium enterprises (SMEs) and stable companies such as listed companies. In particular, the significance of this study lies in showing, over large-scale data, that the mean-reverting behavior of start-up firms' sales growth rates differs from that of listed companies and differs across industries. While the linear model, which is useful for estimating the sales of a specific company, is likely to be utilized in practice, the range model, which can be used to estimate the sales of unspecified firms, is likely to be useful for policy. When analyzing the business activities and performance of a specific industry or enterprise group, the range model has policy utility in that the data-based start-up sales forecasting system can provide references and enable comparisons.
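
As a rough illustration of the analogical forecasting idea described above, a new firm's sales path can be projected by compounding the year-by-year growth rates of similar start-ups in the same industry onto its initial sales, with a mean path as the linear-model analogue and percentile paths as the range-model analogue. This is a hypothetical sketch, not the paper's fitted models; the peer growth-rate table and the quartile choice are assumptions.

```python
import numpy as np

def analogical_forecast(first_year_sales, peer_growth_rates):
    """Project sales by compounding similar start-ups' annual growth rates.

    peer_growth_rates: array of shape (n_peers, n_years) holding each
    similar start-up's year-over-year sales growth rate (e.g. 0.12 = +12%).
    Returns (point, low, high): a point forecast from the mean growth path
    and a range forecast from the 25th/75th-percentile growth paths.
    """
    rates = np.asarray(peer_growth_rates, dtype=float)
    mean_path = np.cumprod(1 + rates.mean(axis=0))
    low_path = np.cumprod(1 + np.percentile(rates, 25, axis=0))
    high_path = np.cumprod(1 + np.percentile(rates, 75, axis=0))
    return (first_year_sales * mean_path,
            first_year_sales * low_path,
            first_year_sales * high_path)
```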

Clinical Usefulness of Implanted Fiducial Markers for Hypofractionated Radiotherapy of Prostate Cancer (전립선암의 소분할 방사선치료 시에 위치표지자 삽입의 유용성)

  • Choi, Young-Min;Ahn, Sung-Hwan;Lee, Hyung-Sik;Hur, Won-Joo;Yoon, Jin-Han;Kim, Tae-Hyo;Kim, Soo-Dong;Yun, Seong-Guk
    • Radiation Oncology Journal
    • /
    • v.29 no.2
    • /
    • pp.91-98
    • /
    • 2011
  • Purpose: To assess the usefulness of implanted fiducial markers in the setup of hypofractionated radiotherapy for prostate cancer patients by comparing a fiducial marker matched setup with a pelvic bone match. Materials and Methods: Four prostate cancer patients treated with definitive hypofractionated radiotherapy between September 2009 and August 2010 were enrolled in this study. Three gold fiducial markers were implanted into the prostate through the rectum under ultrasound guidance around a week before radiotherapy. Glycerin enemas were given prior to each radiotherapy planning CT and every radiotherapy session. Hypofractionated radiotherapy was planned for a total dose of 59.5 Gy in daily 3.5 Gy fractions using the Novalis system. Orthogonal kV X-rays were taken before radiotherapy. Treatment positions were adjusted according to the fusion of the fiducial markers on the digitally reconstructed radiographs of the radiotherapy plan with those on the orthogonal kV X-rays. When the difference in the coordinates from the fiducial marker fusion was less than 1 mm, the patient position was approved for radiotherapy. A virtual bone matching was carried out at the fiducial-marker-matched position, and the setup difference between the fiducial marker matching and the bone matching was then evaluated. Results: Three patients received the planned 17-fraction radiotherapy, and the remaining patient received 16 fractions. The setup error of the fiducial marker matching was 0.94 ± 0.62 mm (range, 0.09 to 3.01 mm; median, 0.81 mm), and the mean lateral, craniocaudal, and anteroposterior errors were 0.39 ± 0.34 mm, 0.46 ± 0.34 mm, and 0.57 ± 0.59 mm, respectively. The setup error of the pelvic bone matching was 3.15 ± 2.03 mm (range, 0.25 to 8.23 mm; median, 2.95 mm), and the craniocaudal error (2.29 ± 1.95 mm) was significantly larger than the anteroposterior (1.73 ± 1.31 mm) and lateral (0.45 ± 0.37 mm) errors (p<0.05). The incidence of setup differences exceeding 3 mm and 5 mm among the fractions was 1.5% and 0%, respectively, for the fiducial marker matching, and 49.3% and 17.9%, respectively, for the pelvic bone matching. Conclusion: A more precise setup for hypofractionated radiotherapy of prostate cancer patients is feasible with implanted fiducial marker matching than with pelvic bone matching. The resulting smaller margin expansion of the planning target volume produces less radiation exposure to adjacent normal tissues, which could ultimately make hypofractionated radiotherapy safer.
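
For context, the setup error reported above is the length of the 3D shift vector between the marker-matched (or bone-matched) position and the planned position, evaluated per fraction. A minimal sketch of how such per-fraction errors and over-threshold incidences could be tabulated (the data layout is an assumption, not the authors' software):

```python
import numpy as np

def setup_error_stats(shifts_mm, thresholds=(3.0, 5.0)):
    """shifts_mm: (n_fractions, 3) lateral/craniocaudal/anteroposterior
    shifts in mm for one matching method (hypothetical layout).
    Returns the mean and SD of the 3D setup error and, per threshold,
    the percentage of fractions whose 3D error exceeds it."""
    shifts = np.asarray(shifts_mm, dtype=float)
    err3d = np.linalg.norm(shifts, axis=1)        # 3D setup error per fraction
    incidence = {t: 100.0 * np.mean(err3d > t) for t in thresholds}
    return err3d.mean(), err3d.std(ddof=1), incidence
```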

Daily Setup Uncertainties and Organ Motion Based on the Tomoimages in Prostatic Radiotherapy (전립선암 치료 시 Tomoimage에 기초한 Setup 오차에 관한 고찰)

  • Cho, Jeong-Hee;Lee, Sang-Kyu;Kim, Sei-Joon;Na, Soo-Kyung
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.19 no.2
    • /
    • pp.99-106
    • /
    • 2007
  • Purpose: The patient's position and anatomy vary to some extent during the treatment course due to setup uncertainties and organ motion. These factors can affect not only the dose coverage of the gross tumor but also the overdosage of normal tissue. Setup uncertainties and organ motion can be minimized by precise patient positioning and rigid immobilization devices, but for some anatomical sites such as the prostate, internal organ motion due to physiological processes remains a challenge. In the planning procedure, the clinical target volume is enlarged to create a planning target volume that accounts for setup uncertainties and organ motion. These uncertainties lead to differences between the dose calculated by the treatment planning system and the dose actually delivered. The purpose of this study was to evaluate the interfractional displacement of the organ and the GTV based on tomoimages. Materials and Methods: Over the course of 3 months, the tomoimages of 3 prostate cancer patients treated with a rectal balloon were studied. During the treatment sessions, 26 tomoimages per patient, 76 tomoimages in total, were collected. Tomoimages were taken every day after the initial setup, with a lead marker attached to the patient's skin center for comparison with the CT simulation images. For each daily treatment, the tomoimage was taken just before treatment with the rectal balloon inflated with 60 cc of air to immobilize the prostate gland; this was used routinely in each case. The intrarectal balloon was inserted to a depth of 6 cm from the anal verge. MVCT images were taken with 5 mm slice thickness after the intrarectal balloon was in place and inflated. For this study, lead balls were used to guide the registration between the MVCT and CT simulation images. There are three image fusion methods in tomotherapy: the bone technique, the bone/tissue technique, and the full image technique. We used all three methods to analyze the setup errors. Image fusions were initially based on the visual alignment of the lead balls, the CT anatomy, and the CT simulation contours; the radiation therapist then registered the MVCT images with the CT simulation images based on bone, rectal balloon, and GTV, respectively, and the registered images were compared with each other. The average and standard deviation of each X, Y, Z shift and rotation from the initial planning center were calculated for each patient. Results: There was a significant difference in the mean variation of the rectal balloon among the methods. Statistical results based on the bone fusion show that the maximum shift was 8 mm in the x-direction and 4.2 mm in the y-direction; this was statistically significant (p<0.0001). In the balloon-based fusion, the maximum X and Y shifts were 6 mm and 16 mm, respectively. One patient's result exceeded a 16 mm shift, which derived from rectal expansion due to bowel gas and stool. GTV-based fusion results ranged from 2.7 to 6.6 mm in the x-direction and 4.3 to 7.8 mm in the y-direction. We also checked the rotational error in this study; there were no significant differences among the fusion methods, the results being 0.37 ± 0.36 in the bone-based fusion and 0.34 ± 0.38 in the GTV-based fusion.
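
The per-method comparison above amounts to collecting the X/Y/Z shifts produced by each registration method across all tomoimages and summarizing them. A hypothetical sketch of that bookkeeping (method names and data layout are assumptions):

```python
import numpy as np

def fusion_shift_summary(shifts_by_method):
    """shifts_by_method: dict mapping a fusion method name (e.g. 'bone',
    'balloon', 'gtv') to an (n_images, 3) array of X/Y/Z shifts in mm
    relative to the planning CT (hypothetical layout).
    Prints mean ± SD per axis and the largest absolute shift per method."""
    for method, shifts in shifts_by_method.items():
        shifts = np.asarray(shifts, dtype=float)
        mean = shifts.mean(axis=0)
        sd = shifts.std(axis=0, ddof=1)
        stats = " ".join(f"{m:5.2f}±{s:4.2f}" for m, s in zip(mean, sd))
        print(f"{method:8s} {stats}  max |shift| = {np.abs(shifts).max():.1f} mm")
```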


A Study on Legal and Regulatory Improvement Direction of Aeronautical Obstacle Management System for Aviation Safety (항공안전을 위한 장애물 제한표면 관리시스템의 법·제도적 개선방향에 관한 소고)

  • Park, Dam-Yong
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.31 no.2
    • /
    • pp.145-176
    • /
    • 2016
  • Aviation safety can be secured through regulations and policies in various areas and their thorough execution in the field. Recently, for aviation safety management, Korea has been making efforts to prevent aviation accidents by taking various measures: selecting and promoting major strategic goals for each sector; establishing the National Aviation Safety Program, including the Second Basic Plan for Aviation Policy; and improving aviation-related legislation. Obstacle limitation surfaces are to be established and publicly notified to ensure safe take-off and landing, as well as aviation safety while aircraft circle around airports. This study reviews the current aeronautical obstacle management system, which was designed to ensure that buildings and structures do not exceed the height of the obstacle limitation surface, and identifies its operating problems based on my field experience. I also propose ways to improve the system in legal and regulatory aspects. Nowadays, at the request of residents in the vicinity of airports, discussions and studies on aeronautical review are being actively carried out, and related ordinances and specific procedures will be established soon. In addition, however, I would like to propose ways to remedy the shortcomings of the current system caused by the lack of regulations and legislation for obstacle management. To enforce the obstacle limitation surface regulation, limits must be placed on the construction of new buildings, which genuinely restricts the exercise of property rights by residents living in the vicinity of airports. In this sense, it is regarded as a sensitive issue, since a number of related civil complaints are filed and swift but accurate decision-making is required. Under the Aviation Act, airport operators currently handle this task in cooperation with local governments, so the administrative activities of the local governments that have the authority to permit the installation of buildings and structures are critically important. The law requires precise surveying of a vast area and reporting of the outcome to the government every five years. However, many problems can arise, such as changes in the number of obstacles due to survey errors, or failure to request consultation with local governments on the exercise of construction permission; yet there are neither standards for allowable error, nor preventive measures, nor penalties for violating the appropriate procedures, so only follow-up measures can be taken. Moreover, once the construction of a building that violates the obstacle limitation surface is completed, it is practically difficult to take any measure, including removal of the building, because the owner will have followed the legal process by obtaining a permit from the government. To address this problem, I believe a penalty provision for violations of the Aviation Act needs to be added, and the allowable-error standards stipulated in the Building Act should be applied to precise surveying in the aviation field. Hence, I propose ways to improve the current system in an effective manner.

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the models. Here, word vectors generally refer to vector representations of the words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can a satisfactory level of classification accuracy be achieved when applying deep learning to Korean sentiment analysis? To approach these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy obtained by a non-static CNN (Convolutional Neural Network) model taking the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes alone, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of included morphemes, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme appear to have no definite influence on the classification accuracy.
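
A minimal sketch of the morpheme-vector derivation described above, assuming KoNLPy's Komoran tagger for morpheme analysis and gensim (≥4) for the CBOW model; the study's actual tagger, preprocessing pipeline, and tag-attachment format are not specified in the abstract, so those choices are assumptions.

```python
from konlpy.tag import Komoran       # morpheme analyzer; the study's tagger may differ
from gensim.models import Word2Vec   # gensim >= 4 API

tagger = Komoran()

def to_morphemes(sentence, attach_pos=False):
    """Split a Korean sentence into morphemes, optionally appending the POS
    tag (e.g. '예쁘/VA') so that homonyms with different POS stay separate."""
    return [f"{m}/{t}" if attach_pos else m for m, t in tagger.pos(sentence)]

# Placeholder corpus: in the study this would be ~2 million Naver Shopping
# reviews or 520,000 Naver News articles, spell/space-corrected beforehand.
raw_sentences = ["배송 빠르고 제품도 예쁘고 좋아요", "향이 별로라서 실망했어요"]
corpus = [to_morphemes(s, attach_pos=True) for s in raw_sentences]

# CBOW (sg=0), context window 5, 300-dimensional vectors, as in the study
model = Word2Vec(corpus, vector_size=300, window=5, sg=0, min_count=1)
vec = model.wv['예쁘/VA']             # 300-d morpheme vector for '예쁘'
```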

Study of the Impact of Light Through the Vitamin B₁₂/Folate Inspection (Vitamin B₁₂/Folate 검사 시 빛의 영향에 대한 고찰)

  • Cho, Eun Bit;Pack, Song Ran;Kim, Whe Jung;Kim, Seong Ho;Yoo, Seon Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.2
    • /
    • pp.162-166
    • /
    • 2012
  • Purpose: Vitamin B₁₂ and Folate assays are anemia work-up tests well known to be light-sensitive; the screening manual also specifies that light conditions must be controlled. Accordingly, our laboratory minimizes light exposure when running the Vitamin B₁₂ and Folate tests, but exposure cannot be blocked entirely because of various factors, such as specimen segregation. Thus, this study set out to identify the extent to which light can influence the results and whether the exclusion of light is mandatory during the Vitamin B₁₂/Folate test. Materials and Methods: We conducted two experiments, assessing the influence of light both while performing the Vitamin B₁₂/Folate test and while specimens were in storage. The experiments used patients' specimens of various concentrations submitted to our hospital in March 2012. The first experiment verified the effect of light exposure on the Vitamin B₁₂/Folate results during the assay itself, comparing the results of light exposure versus light exclusion during the incubation step after reagent dispensing. The second experiment examined the impact of light exposure on the Vitamin B₁₂/Folate results during storage: for 1, 2, and 7 days, specimens were stored frozen at -15°C either fully shielded from light or exposed to it, and the results of the light-excluded and exposed specimens were compared. Results: In the first experiment, there were no noticeable changes in the standard and specimen cpm, but for Vitamin B₁₂ the average result of the specimens exposed to light was 7.8% higher than that of the light-excluded specimens; at the 0.05 significance level the p-value was 0.251, indicating no significant effect. For Folate, the result under light exposure decreased 5.4%, with a p-value of 0.033, indicating a small effect. In the storage experiment, the results depended on light exposure. For Vitamin B₁₂, the results of the light-exposed specimens changed by 11.6% on day 1 and 10.8% on day 2, and increased 3.8% on day 7; the p-values for days 1, 2, and 7 were 0.372, 0.033, and 0.144, respectively, indicating no significant effect on days 1 and 7. For Folate, the light-exposed specimens increased 1.4% on day 1, with hardly any impact, changed 6.1% on day 2, and decreased 5.2% on day 7; the p-values for days 1, 2, and 7 were 0.378, 0.037, and 0.217, respectively, again indicating no significant effect on days 1 and 7. Conclusion: After scrutinizing the impact of light exposure versus exclusion, Vitamin B₁₂ showed no effect, while Folate showed no marked influence, although light exclusion is recommended during the assay given its p-value of 0.033. During storage, the day-2 results appear to depend on light exclusion; however, considering the complexity of the experimental process, technical errors are plausible, so light is likely to have no real impact. Nevertheless, it is advisable to exclude light during long-term storage, given that the p-values for days 1 and 7 had diminished.
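
The light-exposure comparisons above are paired comparisons of the same specimens measured with and without light exclusion. A generic sketch of how the percent change and a p-value at the 0.05 level could be computed, assuming a paired t-test (the abstract does not state which test the laboratory used):

```python
import numpy as np
from scipy import stats

def light_effect(exposed, excluded, alpha=0.05):
    """exposed, excluded: paired assay results (same specimens) with and
    without light exclusion. Returns the mean percent change relative to
    the light-excluded results and the paired t-test p-value; p >= alpha
    is read as 'no significant impact' at that level."""
    exposed = np.asarray(exposed, dtype=float)
    excluded = np.asarray(excluded, dtype=float)
    pct_change = 100.0 * (exposed - excluded).mean() / excluded.mean()
    t_stat, p_value = stats.ttest_rel(exposed, excluded)
    return pct_change, p_value, p_value >= alpha
```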


Relationships on Magnitude and Frequency of Freshwater Discharge and Rainfall in the Altered Yeongsan Estuary (영산강 하구의 방류와 강우의 규모 및 빈도 상관성 분석)

  • Rhew, Ho-Sang;Lee, Guan-Hong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.16 no.4
    • /
    • pp.223-237
    • /
    • 2011
  • The intermittent freshwater discharge has a critical influence on the biophysical environment and ecosystems of the Yeongsan Estuary, where the estuary dam has altered the continuous mixing of saltwater and freshwater. Though freshwater discharge is controlled by humans, the extreme events are mainly driven by heavy rainfall in the river basin and have various impacts depending on their magnitude and frequency. This research aims to evaluate the magnitude and frequency of extreme freshwater discharges and to establish the magnitude-frequency relationships between basin-wide rainfall and freshwater inflow. Daily discharge and daily basin-averaged rainfall from Jan 1, 1997 to Aug 31, 2010 were used to determine the relations between discharge and rainfall. Consecutive daily discharges were grouped into independent events using a well-defined event-separation algorithm. Partial duration series were extracted to obtain the proper probability distribution function for extreme discharges and the corresponding rainfall events. Extreme discharge events over the threshold of 133,656,000 m³ number 46 over the 13.7 years, following the Weibull distribution with k=1.4. The 3-day accumulated rainfalls ending one day before the peak discharges (the 1day-before-3day-sum rainfall) were chosen as the control variable for discharge, because their magnitude is best correlated with that of the extreme discharge events. The minimum value of the corresponding 1day-before-3day-sum rainfall, 50.98 mm, was initially set as the threshold for selecting discharge-inducing rainfall cases. The number of 1day-before-3day-sum rainfall cases after this selection, however, exceeds the number of extreme discharge events. Canonical discriminant analysis indicates that the water level above the target level (-1.35 m EL.) can be used to divide the 1day-before-3day-sum rainfall cases into discharge-induced and non-discharge ones, and shows that a newly set threshold of 104 mm separates the two cases without error. The magnitude-frequency relationships between rainfall and discharge were established with the newly selected 1day-before-3day-sum rainfalls: $D = 1.111\times10^{8} + 1.677\times10^{6}\,\overline{r}_{3day}$ (for $\overline{r}_{3day} \geq 104$, $R^2 = 0.459$), $T_d = 1.326\,T_{r3}^{0.683}$, and $T_d = 0.117\exp(0.0155\,\overline{r}_{3day})$, where $D$ is the quantity of discharge, $\overline{r}_{3day}$ the 1day-before-3day-sum rainfall, and $T_{r3}$ and $T_d$ the return periods of the 1day-before-3day-sum rainfall and the freshwater discharge, respectively. These relations provide a framework to evaluate the effect of freshwater discharge on estuarine flow structure, water quality, and ecosystem responses from the perspective of magnitude and frequency.
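
The three fitted relations above can be applied directly. A small helper sketch (function names are mine; the coefficients and the 104 mm validity threshold are taken from the abstract):

```python
import numpy as np

def discharge_volume(r3day_mm):
    """D = 1.111e8 + 1.677e6 * r3day (m^3), fitted for the 1day-before-
    3day-sum rainfall r3day >= 104 mm (R^2 = 0.459)."""
    r = np.asarray(r3day_mm, dtype=float)
    if np.any(r < 104):
        raise ValueError("relation fitted only for r3day >= 104 mm")
    return 1.111e8 + 1.677e6 * r

def discharge_return_period_from_T(T_r3_years):
    """T_d = 1.326 * T_r3^0.683: discharge return period from the
    rainfall return period."""
    return 1.326 * np.asarray(T_r3_years, dtype=float) ** 0.683

def discharge_return_period_from_rainfall(r3day_mm):
    """T_d = 0.117 * exp(0.0155 * r3day): discharge return period
    directly from the rainfall amount."""
    return 0.117 * np.exp(0.0155 * np.asarray(r3day_mm, dtype=float))
```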

A Study of The Medical Classics in the 'Āyurveda' ('아유르베다'(Āyurveda)의 의경(醫經)에 관한 연구)

  • Kim, Ki-Wook;Park, Hyun-Kuk;Seo, Ji-Young
    • Journal of Korean Medical classics
    • /
    • v.20 no.4
    • /
    • pp.91-117
    • /
    • 2007
  • Through a simple study of the medical classics of the 'Āyurveda', we have summarized them as follows. 1) Traditional Indian medicine started in the Ganges river area at about 1500 B.C.E., and traces of medical science can be found in the "Rigveda" and "Atharvaveda". 2) The "Charaka" and the "Suśruta (妙聞集)", ancient texts from India, are not the work of one person but the result of the work and errors of different doctors and philosophers. Due to the lack of historical records, the times when Charaka or Suśruta (妙聞) lived are not exactly known. The completion of the "Charaka" is estimated at the 1st~2nd century C.E. in northwestern India, and the "Suśruta" is estimated to have been completed in the 3rd~4th century C.E. in central India. The "Charaka" contains details on internal medicine, while the "Suśruta" by comparison contains more details on surgery. 3) 'Vāgbhata', one of the revered Vriddha Trayi (triad of the ancients, 三醫聖) of the 'Āyurveda', lived and worked in about the 7th century and wrote the "Aṣṭānga hṛdaya saṃhitā (八支集)" and the "Aṣṭānga Sangraha saṃhitā (八心集)", in which he tried to reconcile and unify the "Charaka" and the "Suśruta". The "Aṣṭānga Sangraha saṃhitā" was translated into Tibetan and Arabic at about the 8th~9th century, and the medicinal plants recorded in the "Charaka", the "Suśruta", and the "Aṣṭānga Sangraha saṃhitā" number about 240, 370, and 240 types, respectively. 4) The 'Madhava' focused on one of the subjects of Indian medicine, 'Nidāna', meaning "the cause of diseases (病因論)", and in one of the copies found by Bower, from the 4th century C.E., we can see that it uses prescriptions from the "BuHaLaJi (布哈拉集)", the "Charaka", and the "Suśruta". 5) According to the "Charaka", there were 8 branches of ancient medicine in India: treatment of the body (kayacikitsa), special surgery (salakya), removal of alien substances (salyapahartka), treatment of poison or mis-combined medicines (visagaravairodhikaprasamana), the study of ghosts (bhutavidya), pediatrics (kaumarabhrtya), perennial youth and long life (rasayana), and the strengthening of the essence of the body (vajikarana). 6) The 'Āyurveda', which originated from ancient experience, was recorded in Sanskrit as a theorization of knowledge, was written in verse to make memorization easy, and made medicine the exclusive possession of the Brahmin. The first annotations date to 1060 for the "Charaka", 1200 for the "Suśruta", 1150 for the "Aṣṭānga Sangraha saṃhitā", and 1100 for the "Nidāna". The use of various mineral medicines in the "Charaka", the use of mercury as an internal medicine in the "Aṣṭānga Sangraha saṃhitā", and the palpation of the pulse for diagnosis in the 'Āyurveda' and 'XiZhang (西藏)' medicine are similar to TCM's pulse diagnostics. Coexistence with Arabian 'Unani' medicine, compromise with western medicine, and the reactionist trend have restored the 'Āyurveda' today. 7) The "Charaka" is a book inclined toward internal medicine that investigates the origin of human disease; it used the dualism of the 'Samkhya', the natural philosophy of the 'Vaisesika', and the logic of the 'Nyaya' in its medical theories, and its structure, with 16 syllables per line and 2 lines per verse, is recorded in poetry and prose. The "Charaka" can be divided into the introduction, cause, judgement, body, sensory organs, treatment, pharmaceuticals, and end, and can be seen as a work that strongly reflects the moral code of the Brahmin and Aryans. 8) For extracting bloody pus, the "Charaka" introduces a 'sharp tool' bloodletting treatment, while the "Suśruta" introduces many surgical methods such as the use of gourd dippers and horns and sucking the blood with leeches. The "Suśruta" also has 19 chapters specializing in ophthalmology and describes 76 types of eye diseases and their treatments. 9) Since anatomy did not develop in Indian medicine, the inner structure of the human body was not well known. The only exceptions are 'GuXiangXue (骨相學)', which developed from 'Atharvaveda' times, and the "Aṣṭānga Sangraha saṃhitā", whose 'ShenTiLun (身體論)' gives a thorough account of the development of a child from pregnancy to birth. The 'Āyurveda' is not just an ancient traditional medical system; it is being called alternative medicine in the west because of its ability to supplement western medicine, and as its effects are proved scientifically it is gaining attention worldwide. We would like to note that what we have researched is just a small fragment and a limited view, and we would like to correct and supplement any insufficient parts through further research of new records.


Derivation of the Synthetic Unit Hydrograph Based on the Watershed Characteristics (유역특성에 의한 합성단위도의 유도에 관한 연구)

  • 서승덕
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.17 no.1
    • /
    • pp.3642-3654
    • /
    • 1975
  • The purpose of this thesis is to derive a unit hydrograph applicable to ungaged watersheds from the relations between directly measurable unitgraph properties, such as the peak discharge ($q_p$), the time to peak discharge ($T_p$), and the lag time ($L_g$), and watershed characteristics, such as the river length from the given station to the upstream limit of the watershed in km ($L$), the river length from the station to the centroid of the watershed in km ($L_{ca}$), and the main stream slope in meters per km ($S$). Another procedure, based on routing a time-area diagram through catchment storage and named the Instantaneous Unit Hydrograph (IUH), is also used, and the dimensionless unitgraph is analysed in brief. The basic data (1969 to 1973) used in these studies are 9 recording level gages and rating curves, 41 rain gages and pluviographs, and 40 observed unitgraphs from the 9 sub-watersheds of the Nak Dong River basin. The results of these studies are summarized as follows. 1. The time in hours from the start of rise to the peak rate ($T_p$) generally occurred at the position of 0.3$T_b$ (time base of the hydrograph), with some indication of higher values for larger watersheds. The base flow is comparatively higher than in the other small watershed areas. 2. The losses from rainfall were divided into initial loss and continuing loss. Initial loss may be defined as the portion of storm rainfall which is intercepted by vegetation, held in depression storage, or infiltrated at a high rate early in the storm; continuing loss is the loss which continues at a constant rate throughout the duration of the storm after the initial loss has been satisfied. This continuing loss approximates the nearly constant rate of infiltration (Φ-index method). The loss rate from this analysis was estimated at approximately 50 per cent of the rainfall excess while surface runoff occurred. 3. For stream slope it seems appropriate, as is usual, to consider the main stream only, without giving any specific consideration to tributaries; it is desirable to develop a single measure of slope that is representative of the whole stream. The mean slopes of the channel increments were 1 meter per 200 meters at Gazang and 1 meter per 1,400 meters at Jindong. These slopes are considered slightly low in the light of other river studies, so the flood concentration rate might be slightly low in the Nak Dong River basin. 4. It was found that the watershed lag ($L_g$, hrs) could be expressed by $L_g = 0.253(L \cdot L_{ca})^{0.4171}$, where the product $L \cdot L_{ca}$ is a measure of the size and shape of the watershed. For the logarithms, the correlation coefficient for $L_g$ was 0.97, which shows that $L_g$ is closely related to the watershed characteristics $L$ and $L_{ca}$. 5. An expression for the basin containing the slope might be expected to take the form $L_g = 0.545\left(\frac{L \cdot L_{ca}}{\sqrt{S}}\right)^{0.346}$. For the logarithms, the correlation coefficient for $L_g$ was 0.97, which shows that $L_g$ is closely related to the basin characteristics as well; care is needed in analyses involving the mean slopes. 6. The peak discharge per unit area of the unitgraph for the standard duration $t_r$, in m³/sec/km², was given by $q_p = 10^{-0.52 - 0.0184 L_g}$, with an indication of lower values for watersheds with higher lag times. For the logarithms, the correlation coefficient for $q_p$ was 0.998, showing high significance. The peak discharge of the unitgraph for an area can therefore be expected to take the form $Q_p = q_p \cdot A$ (m³/sec). 7. Using the unitgraph parameter $L_g$, the base length of the unitgraph, in days, was adopted as $T_b = 0.73 + 2.073(L_g/24)$, with a highly significant correlation coefficient of 0.92. The constants of the above equation are fixed by the procedure used to separate base flow from direct runoff. 8. The width $W_{75}$ of the unitgraph at a discharge equal to 75 per cent of the peak discharge, in hours, and the width $W_{50}$ at a discharge equal to 50 per cent of the peak discharge, in hours, can be estimated from $W_{75} = 1.61/q_p^{1.05}$ and $W_{50} = 2.5/q_p^{1.05}$, respectively. These provide a supplementary guide for sketching the unitgraph. 9. The above equations define the three factors necessary to construct the unitgraph for the duration $t_r$. For a duration $t_R$, the lag is $L_{gR} = L_g + 0.2(t_R - t_r)$, and this modified lag $L_{gR}$ is used in $q_p$ and $T_b$. If $t_r$ happens to be equal or close to $t_R$, one may further assume $q_{pR} = q_p$. 10. The triangular hydrograph is a dimensionless unitgraph prepared from the 40 unitgraphs. Its equation is $q_p = \frac{K \cdot A \cdot Q}{T_p}$, or $q_p = \frac{0.21\,A\,Q}{T_p}$; the constant 0.21 is specific to the Nak Dong River basin. 11. The base length of the time-area diagram for the IUH routing is $C = 0.9\left(\frac{L \cdot L_{ca}}{\sqrt{S}}\right)^{1/3}$; the correlation coefficient for $C$ was 0.983, showing high significance. The base length of the T-AD was set equal to the time from the midpoint of the rainfall excess to the point of contraflexure. The constant $K$ derived in these studies is $K = 8.32 + 0.0213\,\frac{L}{\sqrt{S}}$, with a correlation coefficient of 0.964. 12. In light of the results analysed in these studies, the average errors in the peak discharge of the synthetic unitgraph, the triangular unitgraph, and the IUH were estimated as 2.2, 7.7, and 6.4 per cent, respectively, relative to the peak of the observed average unitgraph. Each ordinate of the synthetic unitgraph closely approached the observed one.
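
The fitted relations in results 4, 6, 7, 8, and 9 together define a synthetic unitgraph for an ungaged watershed. A sketch that chains them (function and argument names are mine; unit conventions follow the abstract, and the Nak Dong River calibration is assumed throughout):

```python
def synthetic_unitgraph(L_km, Lca_km, A_km2, t_r=None, t_R=None):
    """Synthetic unitgraph parameters from the fitted relations in the
    abstract (Nak Dong River basin calibration).

    L_km:   main stream length to the upstream watershed limit (km)
    Lca_km: stream length from the station to the basin centroid (km)
    A_km2:  watershed area (km^2)
    t_r, t_R: standard and desired unitgraph durations (hr); if both are
              given, the lag is modified as LgR = Lg + 0.2*(tR - tr).
    """
    Lg = 0.253 * (L_km * Lca_km) ** 0.4171      # watershed lag (hr), result 4
    if t_r is not None and t_R is not None:
        Lg += 0.2 * (t_R - t_r)                 # modified lag LgR, result 9
    qp = 10.0 ** (-0.52 - 0.0184 * Lg)          # unit peak discharge (m^3/s/km^2), result 6
    return {
        "Lg_hr": Lg,
        "qp_m3s_km2": qp,
        "Qp_m3s": qp * A_km2,                   # peak discharge Qp = qp * A
        "Tb_days": 0.73 + 2.073 * (Lg / 24.0),  # base length of unitgraph, result 7
        "W75_hr": 1.61 / qp ** 1.05,            # width at 75% of peak, result 8
        "W50_hr": 2.5 / qp ** 1.05,             # width at 50% of peak, result 8
    }
```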
