• Title/Summary/Keyword: a feasibility


A Study on the Revitalization of the Competency Assessment System in the Public Sector : Compare with Private Sector Operations (공공부문 역량평가제도의 활성화 방안에 대한 연구 : 민간부분의 운영방식과의 비교 연구)

  • Kwon, Yong-man;Jeong, Jang-ho
    • Journal of Venture Innovation / v.4 no.1 / pp.51-65 / 2021
  • HR policy in the public sector was closed and operated mainly on written tests, but in 2006 a new competency-based evaluation, promotion and education system was introduced into the promotion and selection of civil servants. In particular, the seniority-oriented promotion system was replaced by competency-based evaluation through an Assessment Center linked to promotion. Competency assessment is known to be the most reliable and valid of the evaluation methods used to date and to have high predictive validity for job performance. In 2001, 19 government standard competency models were designed, and in 2006 competency assessment was implemented along with the introduction of the Senior Civil Service system. In the public sector, competency assessment is used mainly to select grade-3 civil servants, assign grade-4 civil servants and promote grade-5 civil servants. However, competency assessment in the public sector differs from that in the private sector in its objectives, assessment processes and assessment programmes. In the public sector the purpose is the screening of promotion candidates, whereas the private sector focuses on career development and fostering talent. The public sector therefore develops competencies less continuously than the private sector and does not use assessment to enhance performance on the job. Regarding evaluation items, the public sector generally operates a pass system requiring 2.5 out of 5 across six competencies and gives little feedback on which competencies are lacking, whereas the private sector uses each individual's competency scores.
Regarding the selection and operation of evaluators, the public sector focuses on fairness of evaluation while the private sector focuses on usability; the public-sector approach is inconsistent with developing capabilities and placing human resources in the right positions. Therefore, the public sector should also improve its measures for identifying and motivating outstanding people through competency evaluation, and should change the operation of the system so that candidates can grow into better managers through accurate reports and individual feedback.

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers / v.7 no.1 / pp.861-876 / 1965
  • During my stay in the Netherlands I studied the following, primarily in relation to the Mokpo Yong-san project, which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph, but I want to explain here how to derive one from an actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated and the measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas and the rainfall on the reservoir area.
To calculate the amount of water discharged through the sluice per half hour, the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. Mean tide is adequate for determining the sluice dimension, because spring tide is the worst case and neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account.
When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h = \frac{v^2}{2g}$ and must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges the maximum velocity in the closing gap may not be more than 3 m/sec.
As the maximum velocities are higher than this limit, we must use other construction methods to close the gap. This can be done by dump cars from each side or by using a cableway.
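The unit hydrograph step described above (subtract the base flow, then divide each ordinate of the direct-runoff hydrograph by the rainfall intensity) can be sketched as follows. Only the 9.4 mm/h average intensity comes from the abstract; the discharge and base-flow ordinates are hypothetical illustrations.

```python
# Two-hour unit hydrograph from a single storm (sketch).
# The 9.4 mm/h average intensity is quoted in the abstract;
# all discharge and base-flow ordinates below are hypothetical.

intensity = 9.4  # mm/h, average over the June 24-26, 1963 storm

total_flow = [12.0, 30.8, 21.4, 7.3]  # m^3/s at 2-hour intervals (hypothetical)
base_flow = [2.6, 2.6, 2.6, 2.6]      # m^3/s (hypothetical)

# Direct runoff = total runoff minus base flow
direct_runoff = [q - b for q, b in zip(total_flow, base_flow)]

# Unit hydrograph ordinates: divide each ordinate by the rainfall intensity
unit_hydrograph = [q / intensity for q in direct_runoff]
```

With these numbers the first two ordinates come out to 1.0 and 3.0 m³/s per mm/h.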


A Thermal Time-Driven Dormancy Index as a Complementary Criterion for Grape Vine Freeze Risk Evaluation (포도 동해위험 판정기준으로서 온도시간 기반의 휴면심도 이용)

  • Kwon, Eun-Young;Jung, Jea-Eun;Chung, U-Ran;Lee, Seung-Jong;Song, Gi-Cheol;Choi, Dong-Geun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.8 no.1 / pp.1-9 / 2006
  • Despite the warmer winters recently observed in Korea, the fruit industry reports more freeze injuries and associated economic losses than ever before. Existing freeze-frost forecasting systems employ only the daily minimum temperature to judge potential damage to dormant flower buds and cannot accommodate biological responses such as short-term acclimation of plants to severe weather episodes, nor annual variation in climate. We introduce 'dormancy depth', in addition to daily minimum temperature, as a complementary criterion for judging the potential damage of freezing temperatures to dormant flower buds of grape vines. Dormancy depth can be estimated by a phenology model driven by daily maximum and minimum temperature and is expected to be a reasonable proxy for the physiological tolerance of buds to low temperature. Dormancy depth at a selected site was estimated for a climatological normal year by this model, and we found a close similarity between the time course of the estimated dormancy depth and the known cold tolerance of fruit trees. Inter-annual and spatial variation in dormancy depth were identified by this method, showing the feasibility of using dormancy depth as a proxy indicator of tolerance to low temperature during the winter season. The model was applied to 10 vineyards recently damaged by a cold spell, and a temperature-dormancy depth-freeze injury relationship was formulated as an exponential-saturation model which can be used to judge freeze risk under a given set of temperature and dormancy depth. Based on this model and the expected lowest temperature with a 10-year recurrence interval, a freeze risk probability map was produced for Hwaseong County, Korea. The results seem to explain why the vineyards in the warmer part of Hwaseong County have suffered more freeze damage than those in the cooler part of the county.
A dormancy depth-minimum temperature dual-engine freeze warning system was designed for vineyards in major production counties in Korea by combining site-specific dormancy depth and minimum temperature forecasts with the freeze risk model. In this system, daily accumulation of thermal time since last fall yields the current dormancy state (depth). The regional minimum temperature forecast for the next day issued by the Korea Meteorological Administration is converted to a site-specific forecast at 30 m resolution. These data are input to the freeze risk model, and the percent damage probability is calculated for each grid cell and mapped for the entire county. Similar approaches may be used to develop freeze warning systems for other deciduous fruit trees.
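The abstract states that dormancy depth is driven by daily accumulation of thermal time computed from daily maximum and minimum temperatures, but does not give the model's actual formulation. A toy degree-day accumulation, with an assumed 5 °C base temperature and made-up temperatures, illustrates the accumulation pattern only:

```python
# Toy sketch of daily thermal-time accumulation from daily max/min
# temperatures. The paper's actual chill/anti-chill formulation is not
# given in the abstract; the 5 C base and the temperatures below are
# illustrative assumptions only.

def daily_thermal_time(tmax, tmin, base=5.0):
    """Degree-days above a base temperature for one day."""
    return max((tmax + tmin) / 2.0 - base, 0.0)

def accumulate(daily_temps, base=5.0):
    """Running accumulation since last fall -> a dormancy-state proxy."""
    acc, series = 0.0, []
    for tmax, tmin in daily_temps:
        acc += daily_thermal_time(tmax, tmin, base)
        series.append(acc)
    return series

state = accumulate([(12.0, 2.0), (15.0, 5.0), (3.0, -4.0)])
```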

Asymptomatic Primary Hematuria in Children (소아의 무증상성 일차성 혈뇨에 관한 고찰)

  • Lee, Jung-Mi;Park, Woo-Saeng;Ko, Cheol-Woo;Koo, Ja-Hoon;Kwak, Jung-Sik
    • Childhood Kidney Diseases / v.4 no.1 / pp.25-32 / 2000
  • Purpose: This retrospective study of 126 children with symptomless primary hematuria was undertaken to determine the distribution of histologic types found by renal biopsy and the clinical outcome according to the biopsy findings, and to assess the feasibility of performing renal biopsy in these children. Patients and Methods: The study population consisted of 126 children with symptomless primary hematuria who were admitted to the pediatric department of Kyungpook National University Hospital over the 11 years from 1987 to 1998 and underwent percutaneous renal biopsy. Children with hematuria of less than 6 months' duration, evidence of systemic illness such as SLE or Henoch-Schonlein purpura, urinary tract infection, or idiopathic hypercalciuria were excluded from the study. Results: The mean age at presentation was 9.2±3.3 years (range 1.5-15.3 years), with a male preponderance (male to female ratio 2:1). IgA nephropathy was the most common biopsy finding, occurring in 60 children (47.6%), followed by MsPGN in 13 (10.3%), MPGN in 5 (3.9%), TGBM in 6 (4.7%), Alport syndrome in 2 (1.6%) and FSGS in 1 (0.8%); in 39 children (30.9%), 'normal' glomeruli were noted. Recurrent gross hematuria was more common than persistent microscopic hematuria (84 versus 42), and in IgA nephropathy in particular, recurrent gross hematuria was the most prevalent pattern. In 58 of 126 cases (46.0%), hematuria was isolated, without accompanying proteinuria; this was especially true in cases of MsPGN and of 'normal' glomeruli on biopsy. Normalization of urinalysis (disappearance of hematuria) in the IgA nephropathy, MsPGN and 'normal' glomeruli groups was similar: 14%, 27% and 21% respectively during 1-2 years of follow-up, and 37.1%, 40% and 35% respectively during 3-4 years of follow-up. However, abnormal urinalysis persisted in the majority of children with MPGN, TGBM,
Alport syndrome and FSGS. Renal function deteriorated progressively in 6 cases (3 with IgA nephropathy, 2 with Alport syndrome and 1 with TGBM). Conclusion: In summary, the present study of 126 children with symptomless primary hematuria demonstrates that IgA nephropathy was the most common biopsy finding, followed by MsPGN, MPGN, TGBM, Alport syndrome and FSGS, with 'normal' glomeruli seen in 39 cases (30.9%). Renal histology could not be predicted from the clinical findings, so to establish appropriate long-term planning for these children we would recommend obtaining a precise histologic diagnosis by renal biopsy.


Removal of Red Tide Organisms -2. Flocculation of Red Tide Organisms by Using Loess- (적조생물의 구제 -2. 황토에 의한 적조생물의 응집제거-)

  • KIM Sung-Jae
    • Korean Journal of Fisheries and Aquatic Sciences / v.33 no.5 / pp.455-462 / 2000
  • The objective of this study was to examine the physicochemical characteristics of the coagulation reaction between loess and red tide organisms (RTO) and its feasibility, in developing a technology for the removal of RTO blooms in coastal seas. The physicochemical characteristics of loess were examined: particle size distribution, surface characteristics by scanning electron microscopy, zeta potential, and alkalinity and pH variations in sea water. The two kinds of RTO used in this study, Cylindrotheca closterium and Skeletonema costatum, were sampled in Masan Bay and cultured in the laboratory. Coagulation experiments were conducted using various concentrations of loess and RTO and a jar tester. The supernatant and the RTO culture solution were analyzed for pH, alkalinity and RTO cell number. The negative zeta potential of loess increased with increasing pH in 10⁻³ M NaCl solution, reaching -71.3 mV at pH 9.36. Loess had a positive zeta potential of +1.8 mV at pH 1.98, characteristic of a material with an amphoteric surface charge. In NaCl and CaCl₂ solutions, the negative zeta potential of loess decreased with increasing Na⁺ and Ca²⁺ ion concentration; no charge reversal occurred with Na⁺, which is not specifically adsorbed, while charge reversal did occur with Ca²⁺, which is specifically adsorbed. In sea water, loess and RTO showed similar zeta potential values of -11.2 and -9.2 mV respectively, and sea sand powder showed the highest zeta potential value, -25.7 mV, among the clays. The electrical double layers (EDLs) of loess and RTO were extremely compressed by the high concentration of salts in sea water. As a result, there was almost no EDL repulsive force between loess and RTO approaching each other, and the London-van der Waals attractive force was always larger than the EDL repulsive force, so flocs formed easily.
Removal rates of RTO increased exponentially with increasing loess concentration: they rose steeply up to 800 mg/L of loess and reached 100% at 6,400 mg/L. Removal rates of RTO also increased exponentially with increasing G-value, indicating that mixing (i.e., collision among particles) is very important for the coagulation reaction. Loess showed the highest RTO removal rates among the clays.
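The abstract describes removal rates that increase exponentially with loess concentration and saturate near 100% at 6,400 mg/L. One common way to express "exponential increase to saturation" is R(c) = Rmax·(1 − exp(−k·c)); the rate constant k below is purely illustrative, not a value fitted in the paper.

```python
import math

# Saturating-exponential sketch of removal rate vs. loess concentration.
# The abstract reports ~100% removal at 6,400 mg/L; the constant k here
# is an illustrative assumption, not the paper's fitted parameter.

def removal_rate(c_mg_per_L, k=1.0e-3, r_max=100.0):
    """Removal rate (%) as a saturating exponential of loess dose."""
    return r_max * (1.0 - math.exp(-k * c_mg_per_L))
```

With k = 10⁻³ L/mg this form rises steeply at low doses and exceeds 99% by 6,400 mg/L, matching the qualitative shape reported.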


Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields of A.I. research owing to its wide applicability. Many studies have been conducted to improve the performance of image captioning in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person who encounters it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships.
On the contrary, domain experts tend to recognize an image by focusing on the specific elements needed to interpret it based on their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning with expertise data may invoke another type of problem. Simultaneous learning with captions of various characteristics may invoke a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the results. In fine-tuning, by contrast, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 'image / expertise caption' pairs were created and used for the expertise-transplantation experiments.
As a result of the experiments, it was confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated from learning on general data alone contain much content irrelevant to expert interpretation. In this paper we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on mitigating the shortage of expertise data and on improving the performance of image captioning.
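The pre-train-then-fine-tune pattern described in the abstract, with a separate fine-tuned copy per expert domain, can be illustrated with a deliberately tiny stand-in model. A one-feature logistic model replaces the captioning network; all data, names ("art_therapy") and hyperparameters here are illustrative assumptions, not the paper's setup.

```python
import math

# Toy stand-in for pre-training on general data followed by independent
# per-domain fine-tuning ("expertise transplant"). The captioning network
# is replaced by a one-feature logistic model; everything is illustrative.

def train(w, b, data, lr=0.5, epochs=200):
    """Plain SGD on logistic loss; returns the updated (w, b)."""
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# 1) Pre-train on a large amount of "general" data.
general = [(x / 10.0, 1 if x > 5 else 0) for x in range(11)] * 5
w0, b0 = train(0.0, 0.0, general)

# 2) Transplant expertise: fine-tune a separate copy of the pre-trained
#    weights for each expert domain, independently of the others.
expert_sets = {"art_therapy": [(0.3, 1), (0.8, 1), (0.1, 0)] * 5}
tuned = {name: train(w0, b0, data, lr=0.1, epochs=50)
         for name, data in expert_sets.items()}
```

Because each domain fine-tunes its own copy of the pre-trained weights, the domains cannot interfere with each other, which is the point of training each characteristic independently.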

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The details of applying each deep learning technique are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but for business data the distance between fields usually does not matter because the fields are independent. In this experiment we therefore set the filter size of the CNN to the number of fields, so that the whole character of the data is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, and the next best was the MLP model with two hidden layers and dropout.
From the experiments we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs performed well in binary classification problems, to which they have rarely been applied, as well as in the fields where their effectiveness is proven. Third, the LSTM algorithm seems unsuitable for these binary classification problems because the training time is too long relative to the performance improvement. From these results we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
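The abstract evaluates models with the F1 score rather than overall accuracy, since the positive (response) class is what matters in imbalanced telemarketing data. The F1 score is the harmonic mean of precision and recall and can be computed as:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Example with made-up labels (imbalanced, as telemarketing data tends to be)
score = f1_score([1, 0, 0, 0, 1, 1], [1, 0, 0, 1, 0, 1])
```

Here tp = 2, fp = 1, fn = 1, so precision = recall = 2/3 and F1 = 2/3, whereas overall accuracy (4/6) would mask how the positive class is handled.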

Investigation of the Rice Plant Transfer and the Leaching Characteristics of Copper and Lead for the Stabilization Process with a Pilot Scale Test (논토양 안정화 현장 실증 시험을 통한 납, 구리의 용출 저감 및 벼로의 식물전이 특성 규명)

  • Lee, Ha-Jung;Lee, Min-Hee
    • Economic and Environmental Geology / v.45 no.3 / pp.255-264 / 2012
  • Stabilization using limestone (CaCO₃) and steel-making slag as immobilization amendments for Cu- and Pb-contaminated farmland soils was investigated by batch tests, continuous column experiments, and a pilot-scale feasibility study with four testing grounds at the contaminated site. In the batch experiments, the amendment with a mixture of 3% limestone and 2% steel-making slag reduced Cu and Pb leaching by more than 85% compared with unamended soil. An acrylic column (1 m in length and 15 cm in diameter) equipped with valves, tubes and a sprinkler was used for the continuous column experiments. Without amendment, the Pb concentration of the column leachate remained above 0.1 mg/L (the groundwater tolerance limit). With 3% limestone and 2% steel-making slag, however, the Pb leaching concentration was reduced by more than 60% within one year and remained below 0.04 mg/L. For the unamended testing ground, the Pb and Cu concentrations of soil water after 60 days of incubation were 0.38 mg/L and 0.69 mg/L respectively, suggesting continuous leaching of Cu and Pb from the site. For the testing ground amended with the mixture of 3% limestone + 2% steel-making slag, no water-soluble Pb or Cu was detected after 20 days of incubation. For all testing grounds, the Pb and Cu transfer to the plant followed the order root > leaves (including stem) > rice grain. The amendment with limestone and steel-making slag reduced Pb and Cu transfer to the plant by more than 75% compared with no amendment. The results of this study show that amendment with a mixture of limestone and steel-making slag decreases not only the leaching of heavy metals but also their transfer from soil to plant.

Feasibility study of the usefulness of SRS thermoplastic mask for head & neck cancer in tomotherapy (두경부 종양의 토모치료 시 정위적방사선수술 마스크의 유용성 평가에 대한 연구)

  • Jeon, Seong Jin;Kim, Chul Jong;Kwon, Dong Yeol;Kim, Jong Sik
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.355-362 / 2014
  • Purpose: In head & neck cancer radiation therapy, a thermoplastic mask is applied to immobilize the patient. The purpose of this study is to evaluate the usefulness of an SRS thermoplastic mask in tomotherapy by comparison with the conventional mask. Materials and Methods: A typical mask (conventional mask, C-mask) and a mask for SRS (S-mask) were used to fix a body phantom (Rando phantom) on the same isocenter line, and simulation was performed. A tomotherapy plan for the orbit and salivary glands was made with the treatment planning system (TPS). A thick portion and a thin portion of the mask located near the treatment target were defined as regions of interest for surface dose measurement. Surface dose variation depending on the type of mask was analyzed in the TPS and measured with EBT film. Results: Surface dose variation due to the type of mask in the TPS was 0.65-2.53 Gy for the orbit and 0.85-1.84 Gy for the salivary glands; for the EBT film it was -0.2-3.46 Gy and 1.04-3.02 Gy, respectively. When the S-mask was applied, the maximum change was 4.26% in the TPS and 5.82% with Gafchromic EBT3 film. Conclusion: Applying the S-mask in tomotherapy changes the surface dose, but the amount is insignificant, and the mask can be useful when the treatment target is close to critical organs because it decreases inter- and intra-fractional variation.

Study of the Tone Variation on Juniperus chinensis L. and Populus glandulosa Uyeki by Photographs (사진상(寫眞上)에 나타난 향나무(Juniperus chinensis L.)와 수원사시나무(Populus glandulosa Uyeki)의 색조변화(色調變化)에 관(關)한 연구(硏究))

  • Kim, Kap Duk
    • Journal of Korean Society of Forest Science / v.13 no.1 / pp.1-24 / 1971
  • This study was made in order to elucidate the feasibility of identifying plant species through photographic tone. The author photographed Juniperus chinensis L. and Populus glandulosa Uyeki on panchromatic film, using either a yellow filter or a red filter, in different seasons. The author analyzed the tone variation of the same photographs under stereoscopic view using a tone scale and an automatic micro-photo-densitometer. The results obtained are summarized as follows. A. Tone scale reading: 1. The tone of Populus glandulosa Uyeki was darker than that of Juniperus chinensis L. in a photograph. 2. Regardless of tree species, the tone of photographs taken with the yellow filter was darker than that with the red filter. 3. The photographic tone changed with the seasons: from spring to summer it became darker, and from summer to winter it became lighter. 4. During winter and spring the two species can be discriminated more easily in stereoscopic view by whether there are leaves on the tree than by tone observation. 5. Regardless of species, tone varied with age: older trees have a darker tone than younger ones. 6. The yearly difference appears to depend on insolation quantities. 7. The highest reflected light wavelengths were between 600 mμ and 660 mμ for both species. B. Density reading: 1. Density readings showed the same tendency as the tone scale readings. 2. The peaks of the scanning curves for Populus glandulosa Uyeki change more smoothly and occur in a lower position than for Juniperus chinensis L. 3. The scanning curve of 20 May was the smoothest, and the change of peaks increased gradually as the season progressed. 4. Regarding filter type, photographs with the yellow filter showed smaller changes of peaks than those with the red filter.
