• Title/Summary/Keyword: computation


A stratified random sampling design for paddy fields: Optimized stratification and sample allocation for effective spatial modeling and mapping of the impact of climate changes on agricultural system in Korea (농지 공간격자 자료의 층화랜덤샘플링: 농업시스템 기후변화 영향 공간모델링을 위한 국내 농지 최적 층화 및 샘플 수 최적화 연구)

  • Minyoung Lee;Yongeun Kim;Jinsol Hong;Kijong Cho
    • Korean Journal of Environmental Biology
    • /
    • v.39 no.4
    • /
    • pp.526-535
    • /
    • 2021
  • Spatial sampling design plays an important role in GIS-based modeling studies because it increases modeling efficiency while reducing the cost of sampling. In the field of agricultural systems, research demand for high-resolution spatial data-based modeling to predict and evaluate climate change impacts is growing rapidly. Accordingly, the need for and importance of spatial sampling design are increasing. The purpose of this study was to design a spatial sampling of paddy fields (11,386 grids at 1 km spatial resolution) in Korea for use in agricultural spatial modeling. A stratified random sampling design was developed and applied for the 2030s, 2050s, and 2080s under two RCP scenarios (RCP 4.5 and RCP 8.5). Twenty-five weather and four soil characteristics were used as stratification variables. Stratification and sample allocation were optimized to ensure a minimum sample size under given precision constraints for 16 target variables such as crop yield, greenhouse gas emission, and pest distribution. Precision and accuracy of the sampling were evaluated through sampling simulations based on the coefficient of variation (CV) and relative bias, respectively. As a result, the paddy fields could be optimally stratified into 5 to 21 strata with 46 to 69 samples. Evaluation results showed that the target variables were within the precision constraints (CV<0.05, except for crop yield) with low bias values (below 3%). These results can contribute to reducing sampling cost and computation time while retaining high predictive power. The design is expected to be widely used as a representative sample grid in various agricultural spatial modeling studies.
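The abstract above pairs optimized stratification with sample allocation under precision constraints. A minimal sketch of one standard allocation rule, Neyman (optimal) allocation, is shown below; the 5-strata setup, the single synthetic target variable, and all names are illustrative assumptions, not the paper's multi-variable optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 11,386 paddy-field grids: each grid is
# pre-assigned to one of 5 strata, with one synthetic target variable.
N_total = 11386
strata_ids = rng.integers(0, 5, size=N_total)
variable = rng.normal(loc=strata_ids * 2.0, scale=1.0)

def neyman_allocation(strata_ids, variable, n_samples):
    """Allocate n_samples across strata proportionally to N_h * S_h."""
    labels = np.unique(strata_ids)
    N_h = np.array([(strata_ids == h).sum() for h in labels])          # stratum sizes
    S_h = np.array([variable[strata_ids == h].std(ddof=1) for h in labels])  # stratum SDs
    weights = N_h * S_h
    n_h = np.maximum(1, np.round(n_samples * weights / weights.sum()).astype(int))
    return dict(zip(labels, n_h))

alloc = neyman_allocation(strata_ids, variable, n_samples=60)

# Draw a stratified random sample according to the allocation.
sample_idx = np.concatenate([
    rng.choice(np.flatnonzero(strata_ids == h), size=n, replace=False)
    for h, n in alloc.items()
])
```

Because the allocation weights each stratum by size times spread, strata that are large or internally variable receive more samples, which is what drives the sample-size savings the abstract reports.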

Conjunction Assessments of the Satellites Transported by KSLV-II and Preparation of the Countermeasure for Possible Events in Timeline (누리호 탑재 위성들의 충돌위험의 예측 및 향후 상황의 대응을 위한 분석)

  • Shawn Seunghwan Choi;Peter Joonghyung Ryu;John Kim;Lowell Kim;Chris Sheen;Yongil Kim;Jaejin Lee;Sunghwan Choi;Jae Wook Song;Hae-Dong Kim;Misoon Mah;Douglas Deok-Soo Kim
    • Journal of Space Technology and Applications
    • /
    • v.3 no.2
    • /
    • pp.118-143
    • /
    • 2023
  • Space is becoming more commercialized. Despite its delayed start, space activities in Korea are attracting more nationwide support from both investors and government. On May 25, 2023, KSLV-II, also called Nuri, successfully transported and inserted seven satellites into a sun-synchronous orbit at 550 km altitude. However, Starlink has over 4,000 satellites around this altitude for its commercial activities. Hence, it is necessary to constantly monitor the collision risks of these satellites against resident space objects, including Starlink. Here we report a quantitative research output regarding the conjunctions, particularly between the Nuri satellites and Starlink. Our calculation shows that, on average, three times every day, a Nuri satellite encounters a Starlink satellite within 1 km with a probability of collision higher than 1.0E-5. A comparative study with KOMPSAT-5, also called Arirang-5, shows that its distribution of closest-approach distances differs significantly from those of the Nuri satellites. We also report a quantitative analysis of the collision-avoidance maneuver cost of Starlink satellites and a strategy for Korea, as a late starter, to speed up and position itself among the space-leading countries. We used the AstroOne program for the analyses and compared its output with that of Socrates Plus from Celestrak. Two-line element (TLE) data were used for the computation.
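Operational conjunction assessment propagates TLEs (e.g. with SGP4) and computes collision probabilities; the toy sketch below shows only the geometric miss-distance screen on pre-computed position histories, the "within 1 km" test the abstract describes. The two near-coincident circular tracks and all names are invented for illustration.

```python
import numpy as np

def close_approaches(r1, r2, times, threshold_km=1.0):
    """Flag epochs where two satellites pass within threshold_km.

    r1, r2: (N, 3) position arrays in km (e.g. sampled ephemerides).
    Returns (time, distance) pairs below the threshold.
    """
    dist = np.linalg.norm(r1 - r2, axis=1)
    hits = dist < threshold_km
    return list(zip(times[hits], dist[hits]))

# Toy ephemerides: two nearly co-located circular tracks at ~550 km altitude.
t = np.linspace(0, 5400, 5401)            # one ~90-min orbit, 1 s steps
R = 6371.0 + 550.0                        # orbit radius, km
w = 2 * np.pi / 5400.0
sat1 = np.stack([R * np.cos(w * t), R * np.sin(w * t), np.zeros_like(t)], axis=1)
sat2 = np.stack([R * np.cos(w * t + 1e-4), R * np.sin(w * t + 1e-4),
                 np.zeros_like(t)], axis=1)

events = close_approaches(sat1, sat2, t)  # here: a constant ~0.7 km separation
```

In practice the distance screen is only a first filter; events that pass it are then evaluated with a covariance-based collision probability, which this sketch does not attempt.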

Comparison between Uncertainties of Cultivar Parameter Estimates Obtained Using Error Calculation Methods for Forage Rice Cultivars (오차 계산 방식에 따른 사료용 벼 품종의 품종모수 추정치 불확도 비교)

  • Young Sang Joh;Shinwoo Hyun;Kwang Soo Kim
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.3
    • /
    • pp.129-141
    • /
    • 2023
  • Crop models have been used to predict yield under diverse environmental and cultivation conditions, which can support decisions on the management of forage crops. Cultivar parameters are one of the required inputs to crop models, representing the genetic properties of a given forage cultivar. The objectives of this study were to compare calibration and ensemble approaches in order to minimize the uncertainty of crop yield estimates using the SIMPLE crop model. Cultivar parameters were calibrated using log-likelihood (LL) and the Generic Composite Similarity Measure (GCSM) as objective functions for the Metropolis-Hastings (MH) algorithm. In total, 20 sets of cultivar parameters were generated for each method. Two types of ensemble approach were examined. The first was the average of model outputs (Eem) obtained from the individual parameter sets. The second was the model output (Epm) for the cultivar parameters obtained by averaging the 20 parameter sets. Comparison was done for each cultivar and for each error calculation method. 'Jowoo' and 'Yeongwoo', which are forage rice cultivars used in Korea, were subject to the parameter calibration. Yield data were obtained from experiment fields at Suwon, Jeonju, Naju and Iksan. Data for 2013, 2014 and 2016 were used for parameter calibration. For validation, yield data reported from 2016 to 2018 at Suwon were used. Initial calibration indicated that genetic coefficients obtained by LL were distributed in a narrower range than coefficients obtained by GCSM. A two-sample t-test was performed to compare the different ensemble approaches, and no significant difference was found between them. The uncertainty of GCSM can be neutralized by adjusting the acceptance probability. The Epm ensemble method indicates that the uncertainty can be reduced with less computation using the ensemble approach.
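The calibration step above draws cultivar parameters with the Metropolis-Hastings algorithm under a log-likelihood objective. A minimal sketch on a one-parameter toy "model" follows; the observed yields, noise level, and constant-yield model are assumptions for illustration, not the SIMPLE crop model or the study's data.

```python
import math
import random

random.seed(42)

# Hypothetical observed yields; the real study calibrates SIMPLE-model
# cultivar parameters against multi-site field data.
obs = [4.1, 4.4, 3.9, 4.6, 4.2]

def crop_model(theta):
    return theta  # toy: the model predicts a constant yield

def log_likelihood(theta, sigma=0.3):
    """Gaussian log-likelihood of the observations given the model."""
    pred = crop_model(theta)
    return sum(-0.5 * ((y - pred) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for y in obs)

def metropolis_hastings(n_iter=5000, step=0.1, theta0=3.0):
    theta, ll = theta0, log_likelihood(theta0)
    samples = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)       # symmetric random walk
        ll_prop = log_likelihood(prop)
        # Accept with probability min(1, exp(ll_prop - ll))
        if random.random() < math.exp(min(0.0, ll_prop - ll)):
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

samples = metropolis_hastings()
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

Averaging the retained draws gives the Epm-style single parameter set; running the model on each draw and averaging the outputs corresponds to the Eem-style ensemble.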

Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park;Yeon-Suk Jeong;Young-Cheol Yoon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.1
    • /
    • pp.67-76
    • /
    • 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing with a rigid body-based MLS differencing approach to measure mechanical variables and analyze the impact of target location and image resolution. This method assesses the displacement of the target attached to the sample through digital image processing and allocates this displacement to the node displacement of the MLS differencing method, which solely employs nodes to calculate mechanical variables such as stress and strain of the studied object. We propose an effective method to measure the displacement of the target's center of gravity using digital image processing. The calculation of mechanical variables through the MLS differencing method, incorporating image-based target displacement, facilitates easy computation of mechanical variables at arbitrary positions without constraints from meshes or grids. This is achieved by acquiring the accurate displacement history of the test specimen and utilizing the displacement of tracking points with low rigidity. The developed testing method was validated by comparing the measurement results of the sensor with those of the DIP-MLS testing method in a three-point bending test of a rubber beam. Additionally, numerical analysis results simulated only by the MLS differencing method were compared, confirming that the developed method accurately reproduces the actual test and shows good agreement with numerical analysis results before significant deformation. Furthermore, we analyzed the effects of boundary points by applying 46 tracking points, including corner points, to the DIP-MLS testing method. This was compared with using only the internal points of the target, determining the optimal image resolution for this testing method. Through this, we demonstrated that the developed method efficiently addresses the limitations of direct experiments or existing mesh-based simulations. 
It also suggests that digitalization of the experimental-simulation process is achievable to a considerable extent.
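Measuring the target's center of gravity by digital image processing, as described above, can be illustrated with a simple moment-based centroid on a thresholded frame. The synthetic image, threshold, and names below are assumptions; the actual DIP-MLS method likely uses more refined subpixel tracking.

```python
import numpy as np

def target_centroid(gray, threshold=128):
    """Center of gravity of a bright target via a thresholded image mask."""
    mask = gray > threshold
    if mask.sum() == 0:
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()   # (x, y) in pixel coordinates

# Synthetic frame: a 5x5 bright target in a dark 32x32 image.
frame = np.zeros((32, 32), dtype=np.uint8)
frame[6:11, 10:15] = 255          # rows 6..10 (y), cols 10..14 (x)
cx, cy = target_centroid(frame)   # centroid at x=12, y=8
```

Tracking this centroid frame by frame yields the displacement history that the method assigns to the MLS nodes.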

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires human effort. As high-speed graphic workstations became more popular, various interactive velocity analysis programs were developed. Although these programs enabled faster picking of the velocity nodes using a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence must usually be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack.
Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted wave. However, it has two improvements: no interpolation error and very fast computing time. With this technique, the mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in the X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
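The velocity spectrum discussed above is typically a semblance panel: for each trial velocity, amplitudes are stacked along the NMO hyperbola t(x) = sqrt(t0^2 + x^2/v^2) and their coherence is measured. A self-contained sketch on a synthetic spike gather follows; the gather, sampling interval, and velocity range are all assumptions for illustration, not the xva implementation.

```python
import numpy as np

def semblance(gather, offsets, dt, t0, velocities):
    """Semblance of a CMP gather at zero-offset time t0 for trial velocities.

    gather: (n_traces, n_samples); offsets in m; dt in s.
    Semblance = (sum_i a_i)^2 / (M * sum_i a_i^2), with amplitudes a_i read
    along the NMO hyperbola t(x) = sqrt(t0^2 + (x/v)^2).
    """
    n_traces, n_samples = gather.shape
    out = []
    for v in velocities:
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
        idx = np.round(t / dt).astype(int)
        valid = idx < n_samples
        a = gather[np.arange(n_traces)[valid], idx[valid]]
        num = a.sum() ** 2
        den = len(a) * (a ** 2).sum()
        out.append(num / den if den > 0 else 0.0)
    return np.array(out)

# Synthetic gather: one reflection with true NMO velocity 2000 m/s.
dt, t0_true, v_true = 0.004, 0.8, 2000.0
offsets = np.arange(0, 1600, 100.0)
gather = np.zeros((len(offsets), 500))
for i, x in enumerate(offsets):
    gather[i, int(round(np.sqrt(t0_true ** 2 + (x / v_true) ** 2) / dt))] = 1.0

velocities = np.arange(1500.0, 2600.0, 100.0)
s = semblance(gather, offsets, dt, t0_true, velocities)
best_v = velocities[np.argmax(s)]   # peaks at the true velocity
```

The interpreter's "pick" corresponds to reading off the velocity at which this coherence measure peaks for each t0.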


Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • The recommender system has become one of the most important technologies in e-commerce these days. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and the recommender system is a key technology to serve these needs. Many past studies about recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users. For new users who do not have any evaluations or preference information, therefore, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty. This sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from other users. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two, gray sheep and others, based on the degree centrality of the users.
Then, different similarity measures and recommendation methods are applied to these two datasets. The detailed algorithm is as follows: Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate those nodes having degree centrality values lower than a pre-set threshold. The threshold value is determined by simulations such that the accuracy of CF for the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them. A 'popular item' method is used to generate recommendations for these users. The F measures of the two datasets are weighted by the numbers of nodes and summed to be used as the final performance metric. In order to test the performance improvement by this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing the 'Best-N-neighbors' and 'Cosine' similarity methods. The empirical results show that the F measure was improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information other than users' evaluations, such as demographic data. Some studies applied SNA techniques as a new similarity metric. This study is novel in that it used SNA to separate the dataset. This study shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand factors affecting the performance of CF. It also opens a door for future studies in the area of applying SNA to CF to analyze the characteristics of datasets. In practice, this study provides guidelines to improve the performance of CF recommender systems with a simple modification.
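Steps 1-2 above (projecting the two-mode user-item network onto a one-mode user-user network, then thresholding degree centrality) can be sketched as follows. The toy ratings and the fixed threshold are assumptions; the paper tunes the threshold by simulation rather than fixing it.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical ratings (user -> set of liked items); the study itself uses
# MovieLens evaluations, not these toy values.
ratings = {
    "u1": {"A", "B", "C"},
    "u2": {"A", "B", "D"},
    "u3": {"B", "C", "D"},
    "u4": {"X", "Y"},        # unique tastes: shares nothing with others
}

# Step 1: project the two-mode (user-item) network onto a one-mode
# (user-user) network, linking users who share at least one item.
degree = defaultdict(int)
for u, v in combinations(ratings, 2):
    if ratings[u] & ratings[v]:
        degree[u] += 1
        degree[v] += 1

# Step 2: users below the degree-centrality threshold are "gray sheep".
THRESHOLD = 1
gray_sheep = {u for u in ratings if degree[u] < THRESHOLD}
mainstream = set(ratings) - gray_sheep
```

Ordinary CF would then run on `mainstream` (Step 3), while `gray_sheep` users receive popular-item recommendations (Step 4).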

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 80s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network was achieved through two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be solved by using transfer learning. Transfer learning is a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first is the ConvNet as a fixed feature extractor, and the second is fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (such as one trained on ImageNet) is used to compute feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and then the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features with high dimensional complexity directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers address different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward into the pre-trained AlexNet, and the activation features from the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, because it carries more information about an image. When the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they are extracted from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with 2.8%, 2.1%, and 3.1% accuracy improvements on Caltech-256, VOC07, and SUN397 respectively, compared to existing work.
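The concatenate-then-PCA pipeline above can be sketched with random stand-ins for the three fully connected AlexNet layer activations (FC6: 4096-d, FC7: 4096-d, FC8: 1000-d); the PCA here is a plain SVD projection, and the batch size and component count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the activations of the three fully connected layers
# for a batch of 50 images (real features come from a forward pass).
n_images = 50
fc6 = rng.normal(size=(n_images, 4096))
fc7 = rng.normal(size=(n_images, 4096))
fc8 = rng.normal(size=(n_images, 1000))

# Concatenate the multiple-layer representation: 4096+4096+1000 = 9192-d.
features = np.concatenate([fc6, fc7, fc8], axis=1)

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

reduced = pca_reduce(features, n_components=32)  # compact, denoised features
```

The reduced features would then be fed to a conventional classifier (e.g. a linear SVM), which is where the reported accuracy gains are measured.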

A Study on Hospital Nurses' Preferred Duty Shift and Duty Hours (병원 간호사의 선호근무시간대에 관한 연구)

  • Lee, Gyeong-Sik;Jeong, Geum-Hui
    • The Korean Nurse
    • /
    • v.36 no.1
    • /
    • pp.77-96
    • /
    • 1997
  • The duty shifts of hospital nurses not only affect nurses' physical and mental health but also present various personnel management problems which often result in high turnover rates. In this context a study was carried out from October to November 1995, for a period of two months, to find out the status of hospital nurses' duty shift patterns and their preferred duty hours and fixed duty shifts. The study population was 867 RNs working in five general hospitals located in Seoul and its vicinity. A questionnaire developed by the writer was used for data collection. The response rate was 85.9 percent, or 745 returns. The SAS program was used for data analysis with the computation of frequencies, percentages and chi-square tests. The findings of the study are as follows: 1. General characteristics of the study population: 56 percent of respondents were in the under-25 age group and 76.5 percent were single; the predominant proportion of respondents were junior nursing college graduates (92.2%) with less than 5 years of nursing experience in hospitals (65.5%). Regarding their future working plans in the nursing profession, nearly 50% responded as uncertain. The reason given for their career plan was predominantly "personal growth and development" rather than financial reasons. 2. The interval between rotations of duty stations was found to be mostly irregular (56.4%), while others reported weekly (16.1%), monthly (12.9%), and fixed terms (4.6%). 3. The main problems related to duty shifts, reported particularly by the evening and night duty nurses, were "not enough time for the family," "afraid of security problems after work when returning home late at night," "lack of leisure time," "problems in physical and physiological adjustment," "problems in family life," "lack of time for interactions with fellow nurses," etc. 4. Forty percent of respondents reported having had 1-2 duty shift rotations, while all others reported 0 times, 2-3 times, more than 3 times, etc., which suggests irregularity in duty shift rotations. 5. The majority (62.8%) of the study population favored the rotating system of duty stations. The reasons for favoring the rotation system were the opportunity for "learning new things and personal development," "better human relations," "better understanding of various duty stations," "changes in a monotonous routine job," etc. The proportion of those disfavoring the rotating system was 34.7 percent, giving the reasons "it impedes development of specialization," "poor job performance," "stress factors," etc. Furthermore, respondents made the following comments in relation to the rotation of duty stations: the nurses should be given the opportunity to participate in the decision-making process; personal interests and aptitudes should be considered; the rotations should occur at regular intervals or be planned in advance; etc. 6. Regarding future career plans, the older, married group with longer nursing experience appeared more likely to consider nursing their lifetime career than the younger, single group with shorter nursing experience (χ²=61.19, p=.000; χ²=41.55, p=.000). The reason given for their future career plan, regardless of length of future service, was predominantly "personal growth and development" rather than financial reasons. In further analysis, the group with shorter career plans claimed "financial reasons" for their future career more readily than the group who consider nursing their lifetime career (χ²=11.73, p=.003). This finding suggests the need for careful consideration in the personnel management of nursing administration, particularly when dealing with nurses' career development. The majority of respondents preferred the fixed day shift. However, further analysis of those who preferred the evening shift by age and civil status showed that the under-25 group (15.1%) and the single group (13.2%) were more likely to favor the fixed evening shift than the over-25 (6.4%) and married (4.8%) groups. These differences were statistically significant (χ²=14.54, p=.000; χ²=8.75, p=.003). 7. A great majority of respondents (86.9%, or n=647) preferred the day shifts. When four different types of duty shifts (Types A, B, C, D) were presented, 55.0 percent of total respondents preferred Type A, the existing one, followed by Type D (22.7%), Type B (12.4%) and Type C (8.2%). 8. When monetary incentives for the evening (20% of salary) and night shifts (40% of salary) of the existing duty type were presented, the day shift again appeared to be the most preferred one, although the rate was slightly lower (66.4% against 86.9%). With the same incentive, the preference rates for the evening and night shifts increased from 11.0 to 22.4 percent and from 0.5 to 3.0 percent, respectively. When the age variable was controlled, the under-25 group showed higher rates (31.6%, 4.8%) than the over-25 group (15.5%, 1.3%) in preferring the evening and night shifts, respectively (p=.000). Civil status also seemed to affect the preferences for duty shifts: the single group showed a lower rate (69.0%) for day duty against 83.6% of the married group, and higher rates for evening and night duties (27.2%, 15.1%) against those of the married group (3.8%, 1.8%). These differences were all statistically significant (p=.001). 9. The findings on preferences for three different types of fixed duty hours, namely B, C and D (with additional monetary incentives), are as follows in order of preference: Type B (12 hrs a day, 3 days a wk): day shift (64.1%), evening shift (26.1%), night shift (6.5%); Type C (12 hrs a day, 4 days a wk): evening shift (49.2%), day shift (32.8%), night shift (11.5%); Type D (10 hrs a day, 4 days a wk) showed a similar trend to Type B. The findings of higher preferences for the evening and night duties when incentives are given, as shown above, suggest the need for the introduction of different patterns of duty hours and incentive measures in order to overcome the difficulties in rostering nursing duties. However, the interpretation of the above data, particularly for Type C, needs caution as the number of respondents is very small (n=61); it requires further in-depth study. In conclusion, it seems that the patterns of nurses' duty hours and shifts in most hospitals in the country have neither been tried with different duty types nor been flexible. The stereotyped rostering system of three shifts and insensitivity to the personal lives of nurses seem to be prevailing. This study seems to support that irregular and frequent rotations of duty shifts may be contributing factors to most nurses' maladjustment problems in physical and mental health and in personal and family life, which eventually may result in high turnover rates. In order to overcome the increasing problems in the personnel management of hospital nurses, particularly in rostering evening and night duty shifts, which may be related to eventual high turnover rates, the findings of this study strongly suggest the need for the introduction of new rostering systems including fixed duties and appropriate incentive measures for evenings and nights, which most nurses want to avoid. Considering that the nursing care of inpatients is a round-the-clock business, the practice of the nursing duty shift system is inevitable. In this context, based on the findings of this study, the following are recommended: 1. Further in-depth studies on duty shifts and hours should be undertaken for the development of appropriate and effective rostering systems for hospital nurses. 2. Appropriate incentive measures for evening and night duty shifts, along with organizational considerations such as trials of preferred duty time bands, duty hours, and fixed duty shifts, should be introduced if good quality of care for the patients is to be maintained round the clock. This may require the initiation of systematic research and development activities in the field of hospital nursing administration as a part of a permanent system in the hospital. 3. Planned and regular intervals, orientation and training, and professional and personal growth should be considered for the rotation of different duty stations or units. 4. Considering the high degree of preference for the duty type of "10 hours a day, 4 days a week" shown in this study, it would be worthwhile to undertake R&D-type studies in large hospital settings.


Studies on the Consumptive Use of Irrigated Water in Paddy Fields During the Growing of Rice Plants (III) (벼생유기간중의 논에서의 분석소비에 관한 연구(II))

  • 민병섭
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.11 no.4
    • /
    • pp.1775-1782
    • /
    • 1969
    • The results of the study on the consumptine use of irrigated water in paddy fields during the growing season of rice plants are summarized as follows. 1. Transpiration and evaporation from water surface. 1) Amount of transpiration of rice plant increases gradually after transplantation and suddenly increases in the head swelling period and reaches the peak between the end of the head swelling poriod and early period of heading and flowering. (the sixth period for early maturing variety, the seventh period for medium or late maturing varieties), then it decreases gradually after that, for early, medium and late maturing varieties. 2) In the transpiration of rice plants there is hardly any difference among varieties up to the fifth period, but the early maturing variety is the most vigorous in the sixth period, and the late maturing variety is more vigorous than others continuously after the seventh period. 3) The amount of transpiration of the sixth period for early maturing variety of the seventh period for medium and late maturing variety in which transpiration is the most vigorous, is 15% or 16% of the total amount of transpiration through all periods. 4) Transpiration of rice plants must be determined by using transpiration intensity as the standard coefficient of computation of amount of transpiration, because it originates in the physiological action.(Table 7) 5) Transpiration ratio of rice plants is approximately 450 to 480 6) Equations which are able to compute amount of transpiration of each variety up th the heading-flowering peried, in which the amount of transpiration of rice plants is the maximum in this study are as follows: Early maturing variety ; Y=0.658+1.088X Medium maturing variety ; Y=0.780+1.050X Late maturing variety ; Y=0.646+1.091X Y=amount of transpiration ; X=number of period. 
      7) As seen from Figures 1 and 2, the correlation between the amount of evaporation from the water surface in paddy fields and the amount of transpiration is highly negative.
      8) The amount of evaporation from the water surface in the paddy field for the varieties used in this study can be calculated from its ratio to the amount of evaporation measured by atmometer (Table 11) and Table 10. It can also be computed by the following equations, up to the period in which it reaches its minimum (the sixth period for the early maturing variety and the seventh period for the medium and late maturing varieties):
         Early maturing variety: Y = 4.67 - 0.58X
         Medium maturing variety: Y = 4.70 - 0.59X
         Late maturing variety: Y = 4.71 - 0.59X
         where Y = amount of evaporation from the water surface in the paddy field and X = number of the period.
      9) Changes in the amount of evapo-transpiration in each growing period follow the same tendency as transpiration; the maximum for the early maturing variety occurs in the sixth period and for the medium and late maturing varieties in the seventh period.
      10) The amount of evapo-transpiration can be calculated from the evapo-transpiration intensity (Table 14) and Table 12 for the varieties used in this study. It can also be computed by the following equations, within the period of maximum quantity:
         Early maturing variety: Y = 5.36 + 0.503X
         Medium maturing variety: Y = 5.41 + 0.456X
         Late maturing variety: Y = 5.80 + 0.494X
         where Y = amount of evapo-transpiration and X = number of the period.
      11) The ratios of the total amount of evapo-transpiration to the total amount of evaporation by atmometer over all growing periods are 1.23 for the early maturing variety, 1.25 for the medium maturing variety, and 1.27 for the late maturing variety.
      12) Of the climatic conditions in Korea, only air temperature shows a high correlation with the amount of evapo-transpiration over all growing periods of rice plants.
      2. Amount of percolation
      1) The amount of percolation used in computing the planning water requirement ought to depend on the water holding dates.
      3. Available rainfall
      1) The available rainfall and its coefficient for each period during the growing season of paddy fields are shown in Table 8.
      2) The ratio (availability coefficient) of available rainfall to the amount of rainfall during the growing season of paddy fields appears to range from 65% to 75% as the standard in Korea.
      3) Available rainfall during the growing season of paddy fields in a common year is estimated to be about 550 millimeters.
      4. Effects of transpiration of rice plants on percolation
      1) The stronger the absorptive action of rice plant roots, the more the amount of percolation decreases, because the absorptive action influences percolation (Table 21, Table 22).
      2) Where rice plants are planted, the amount of percolation changes quite differently in the forenoon, in the afternoon, and at night during the growing season: in the morning and at night, the amount of percolation increases gradually after transplantation to a peak at the end of July or the early part of August (when water or soil temperature is highest) and decreases gradually thereafter; in the afternoon, however, it decreases gradually after transplantation to a minimum in the middle of August and increases gradually thereafter.
      3) In spite of the increasing amount of transpiration, the amount of daytime percolation decreases gradually after transplantation and drops suddenly around the head swelling dates or the heading-flowering period, but it begins to increase suddenly again at the end of August.
      4) Changes in the amount of percolation over all growing periods show variable phenomena: the amount of percolation decreases after the end of July, increases again at the end of August, and then decreases once more. These phenomena may be influenced in a complex way by water or soil temperature (at night and in the forenoon) as well as by the absorptive action of rice plant roots.
      5) The correlation between the amount of daytime percolation and the amount of transpiration is highly negative; the amount of night percolation is influenced by water or soil temperature but hardly at all by transpiration. The amount of daily percolation is estimated to be influenced more by causes other than transpiration.
      6) The correlation between the amount of night percolation and water or soil temperature is highly positive, but there is no correlation between the amount of forenoon or afternoon percolation and water or soil temperature.
      7) There is a high positive correlation (r = +0.8382) between the amount of daily percolation of a pot planted with rice and that of a non-planted pot.
      8) The total amount of percolation over all growing periods of rice plants may be influenced more by the specific permeability of the soil, water or soil temperature, and other factors than by the transpiration of rice plants.
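The period-indexed linear regressions quoted in this abstract can be collected into a small script. The sketch below is illustrative only: the coefficients are taken verbatim from the abstract, but the dictionary layout and function names are our own, not code from the study.

```python
# Sketch of the abstract's linear models Y = a + b*X, where X is the
# period number. Coefficients are quoted from the abstract; the helper
# names are illustrative only.
TRANSPIRATION = {            # valid up to the heading-flowering period
    "early": (0.658, 1.088), "medium": (0.780, 1.050), "late": (0.646, 1.091),
}
EVAPORATION = {              # water-surface evaporation, until its minimum period
    "early": (4.67, -0.58), "medium": (4.70, -0.59), "late": (4.71, -0.59),
}
EVAPOTRANSPIRATION = {       # valid within the period of maximum quantity
    "early": (5.36, 0.503), "medium": (5.41, 0.456), "late": (5.80, 0.494),
}

def predict(model, variety, period):
    """Evaluate Y = a + b*X for one variety at a given period number."""
    a, b = model[variety]
    return a + b * period

# Transpiration of the early maturing variety peaks in the sixth period:
print(round(predict(TRANSPIRATION, "early", 6), 3))   # 0.658 + 1.088*6
```

Note that each equation is only valid over the period range stated in the abstract; extrapolating the evaporation lines past their minimum period, for example, would produce negative values.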


    DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

    • 박만배
      • Proceedings of the KOR-KST Conference
      • /
      • 1995.02a
      • /
      • pp.101-113
      • /
      • 1995
    • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
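The trip-distribution step of the gravity model being calibrated here can be sketched in a few lines. This is a generic, singly constrained gravity model, not the study's code; all zone sizes, productions, attractions, and friction factors below are hypothetical.

```python
import numpy as np

# Illustrative gravity model: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik),
# where P are zonal productions, A zonal attractions, and F friction
# factors taken from a calibrated friction factor curve.
def gravity_trips(productions, attractions, friction):
    """Distribute each origin's productions across destination zones in
    proportion to attraction * friction factor."""
    weights = attractions[None, :] * friction              # (origins, dests)
    probs = weights / weights.sum(axis=1, keepdims=True)   # row-normalize
    return productions[:, None] * probs

P = np.array([100.0, 200.0])            # hypothetical zonal productions
A = np.array([50.0, 150.0, 300.0])      # hypothetical zonal attractions
F = np.array([[1.0, 0.5, 0.2],          # hypothetical friction factors
              [0.3, 1.0, 0.6]])
T = gravity_trips(P, A, F)
# Row sums of T reproduce the input productions by construction.
```

Calibration, as described in the abstract, amounts to adjusting the friction factor curve that generates `F` until the model's trip length frequency distribution matches the observed OD TLF.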
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
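The link adjustment step described above (ground count divided by assigned volume, propagated to every zone whose trips use the selected link) can be sketched as follows. The zone names, volumes, and data layout are hypothetical, not taken from the study.

```python
# Sketch of one SELINK adjustment: compute the link adjustment factor
# as ground count / assigned volume, then scale the productions and
# attractions of every zone whose trips use the selected link.
# All numbers and the dict layout are hypothetical.
def selink_adjust(ground_count, assigned_volume, productions, attractions,
                  zones_using_link):
    factor = ground_count / assigned_volume
    for zone in zones_using_link:
        productions[zone] *= factor
        attractions[zone] *= factor
    return factor

prods = {"A": 1000.0, "B": 800.0, "C": 500.0}
attrs = {"A": 900.0, "B": 700.0, "C": 600.0}
# Assigned volume on the selected link is 2500 trucks against a ground
# count of 2000, so trip ends in zones A and B are scaled by 0.8;
# zone C does not use the link and is untouched.
f = selink_adjust(2000.0, 2500.0, prods, attrs, ["A", "B"])
```

In the study this step is iterated (up to three adjustments) with reassignment between rounds, so the factors for different selected links interact rather than applying once in isolation.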
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
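The abstract reports %RMSE against ground counts repeatedly but does not spell out the formula. A common definition, assumed here rather than quoted from the study, is the RMSE of assigned versus counted link volumes expressed as a percentage of the mean ground count; the check-point volumes below are invented.

```python
import math

# Assumed %RMSE definition (not quoted from the study): RMSE of
# assigned vs. counted volumes as a percentage of mean ground count.
def percent_rmse(assigned, counts):
    n = len(counts)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)

# Hypothetical check points: the same absolute errors on low-volume
# links yield a larger %RMSE, consistent with the reported inverse
# relationship between %RMSE and average ground count.
high = percent_rmse([480, 520, 1010], [500, 500, 1000])
low = percent_rmse([4980, 5020, 10010], [5000, 5000, 10000])
```

Under this definition the North area's 92% %RMSE with an average count of 481 and the South area's 35% with an average of 3154 are exactly the pattern one would expect if absolute assignment errors are of similar size statewide.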
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
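The 18.3% shortfall of the GM forecast against the WisDOT VMT computation quoted in the abstract is straightforward to verify from the two totals; a quick arithmetic check:

```python
# Verify the reported shortfall of the GM forecast (2.975 billion VMT)
# against the WisDOT computation (3.642 billion VMT).
gm_vmt = 2.975       # billions, from the abstract
wisdot_vmt = 3.642   # billions, from the abstract
shortfall_pct = 100.0 * (wisdot_vmt - gm_vmt) / wisdot_vmt
print(round(shortfall_pct, 1))   # matches the 18.3% quoted in the text
```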


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.