• Title/Summary/Keyword: 기초성능 (basic performance)

Search Results: 2,836 (processing time: 0.037 seconds)

Comparison of rainfall-runoff performance based on various gridded precipitation datasets in the Mekong River basin (메콩강 유역의 격자형 강수 자료에 의한 강우-유출 모의 성능 비교·분석)

  • Kim, Younghun;Le, Xuan-Hien;Jung, Sungho;Yeon, Minho;Lee, Gihae
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.2
    • /
    • pp.75-89
    • /
    • 2023
  • As the Mekong River basin is an internationally shared river basin, it is difficult to collect precipitation data, and the quantitative and qualitative quality of the datasets differs from country to country, which may increase the uncertainty of hydrological analysis results. Recently, with the development of remote sensing technology, it has become easier to obtain grid-based precipitation products (GPPs), and various hydrological analysis studies have been conducted in ungauged or large watersheds using GPPs. In this study, rainfall-runoff simulation in the Mekong River basin was conducted using the SWAT model, a semi-distributed model, with three satellite GPPs (TRMM, GSMaP, PERSIANN-CDR) and two gauge-based GPPs (APHRODITE, GPCC). Four water level stations, Luang Prabang, Pakse, Stung Treng, and Kratie, which are major outlets of the Mekong main stem, were selected; the parameters of the SWAT model were calibrated using APHRODITE as the observation for the period from 2001 to 2011, and runoff simulations were verified for the period from 2012 to 2013. In addition, using ConvAE, a convolutional neural network model, spatio-temporal correction of the original satellite precipitation products was performed, and rainfall-runoff performance was compared before and after correction. The original satellite precipitation products and GPCC showed quantitatively under- or over-estimated, or spatially very different, patterns compared with APHRODITE, whereas the satellite precipitation products corrected using ConvAE showed dramatically improved spatial correlation. In the runoff simulations, the results using the ConvAE-corrected satellite precipitation products were significantly more accurate at all outlets than those using the original products.
Therefore, the bias correction technique using ConvAE presented in this study can be applied in various hydrological analyses for large watersheds where the rain gauge network is not dense.
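The abstract does not state which goodness-of-fit statistic was used for the SWAT calibration, but the Nash-Sutcliffe efficiency (NSE) is the standard choice for comparing simulated and observed runoff; a minimal sketch, with illustrative discharge values that are not from the paper:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit,
    0 means the model is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance

# Hypothetical monthly discharge values (m^3/s) at one outlet
obs = [1200.0, 1500.0, 2100.0, 1800.0]
sim = [1100.0, 1600.0, 2000.0, 1750.0]
print(round(nse(obs, sim), 3))
```

A before/after comparison of NSE at each of the four outlets is one simple way to quantify the improvement the corrected products provide.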

Behavior Analysis of Concrete Structure under Blast Loading : (I) Experiment Procedures (폭발하중을 받는 콘크리트 구조물의 실험적 거동분석 : (I) 실험수행절차)

  • Yi, Na Hyun;Kim, Sung Bae;Kim, Jang-Ho Jay;Choi, Jong Kwon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.5A
    • /
    • pp.557-564
    • /
    • 2009
  • In recent years, there have been numerous explosion-related accidents due to military and terrorist activities. Such incidents cause not only damage to structures but also human casualties, especially in urban areas. To protect structures and save human lives in explosion accidents, a better understanding of the effects of explosions on structures is needed. In an explosion, the blast overpressure is applied to concrete structures as an impulsive load of extremely short duration with very high pressure and heat. Generally, concrete is known to have relatively high blast resistance compared with other construction materials. However, information and test results from blast experiments, both domestic and international, have been limited for military and national security reasons. Therefore, in this paper, to evaluate the blast effect on reinforced concrete structures and their protective performance, blast tests were carried out on a 1.0 m × 1.0 m × 150 mm reinforced concrete slab at the Agency for Defense Development. The standoff distance was 1.5 m; the preliminary tests used TNT 9 lbs and TNT 35 lbs, and the main tests used ANFO 35 lbs. This is the first blast experiment conducted domestically for nonmilitary purposes. Based on the basic experimental procedure and the measurement details for acquiring structural behavior data, the blast measurement system and procedure are established. The procedure is built on a measurement system consisting of sensors, signal conditioners, a DAQ system, and software. It can serve as a basic reference for related research areas, including protective design and measurement of structural behavior under blast loading.
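The abstract reports the standoff distance and charge weights but not the scaled distance; in blast engineering, test conditions are commonly summarized by the Hopkinson-Cranz scaled distance Z = R / W^(1/3). A sketch for the TNT charges above (the conversion and computed values are illustrative, not reported by the paper, and the ANFO charge would additionally need a TNT-equivalence factor):

```python
def scaled_distance(standoff_m, charge_lb):
    """Hopkinson-Cranz scaled distance Z = R / W^(1/3),
    with R in meters and W converted from pounds to kg of TNT."""
    charge_kg = charge_lb * 0.45359237
    return standoff_m / charge_kg ** (1.0 / 3.0)

for w in (9, 35):  # TNT charge weights used in the preliminary tests
    print(f"TNT {w} lb at 1.5 m -> Z = {scaled_distance(1.5, w):.2f} m/kg^(1/3)")
```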

Dynamical Study on the Blasting with One-Free-Face to Utilize AN-FO Explosives (초유폭약류(硝油爆藥類)를 활용(活用)한 단일자유면발파(單一自由面發破)의 역학적(力學的) 연구(硏究))

  • Huh, Ginn
    • Economic and Environmental Geology
    • /
    • v.5 no.4
    • /
    • pp.187-209
    • /
    • 1972
  • Drilling position is one of the most important factors affecting blasting effects. There have been many reports on the blasting factors of the burn-cut by Brown and Cook, but in this study the author compared the drilling positions of the burn-cut with those of the pyramid-cut, and also correlated burn-cut effects across drilling patterns not dealt with in Prof. Ito's theory, which emphasized dynamical stress analysis between the explosion and the free face. According to earlier theories, in blasting with one free face an additional tensile stress, reflected at the free face, supplements the primary compressive stress. With the new burn-cut drilling patterns tested here, however, more free faces and shorter distances between drill holes produce greater blasting effects than any other method. To promote this explosive effect rationally, two points must be considered. First, the unloaded hole among the key holes should be drilled with as wide a diameter as possible, so that it produces greater stress relief. Second, the key holes should be placed as close to each other as possible, to give clean blasting. These two factors derive from the experiments, together with the theory that a larger unloaded-hole diameter allows wider secondary free faces, and that closer hole spacing develops more stress relief between loaded and unloaded holes. In the U.S.A. it has been suggested that the ideal distance between holes is about 4 clearances, but according to the author's experiments, the smaller the spacing, the more effective the blasting, with increased broken rock volume and longer drifted length. The large-hole burn-cut method was developed to increase drifting length under the above considerations, and progressive success was achieved: a maximum of 7 blasting cycles per day with 3.1 m of drifting per cycle.
This achievement initiated high-speed drifting work, and it was also proven that applying Metallic AN-FO to the large-hole burn-cut method overcomes the resistance of a single free face. AN-FO, favored for its low price and safe handling, is a mixture of fertilizer-grade or industrial ammonium nitrate and fuel oil; it is insensitive before initiation, but once initiated by a booster it has explosive power equal to ammonium nitrate explosives (ANE). There have been many reports on AN-FO. As for the mixing ratio, these experiments found the best ratios to be 93.5 : 6.5 for powdered AN-FO and 94 : 6 for prilled AN-FO. Its detonation, shock, and friction sensitivities are all lower than those of other explosives, and the residual gas is not toxic. In initiation and detonation-propagation tests, prilled AN-FO was more effective than powdered AN-FO. AN-FO shows its best explosive power 7 days after mixing. While AN-FO had previously been used mainly in open pits, the author developed new improved explosives, Metallic AN-FO and an underwater explosive, based on experiments on these fundamental characteristics. Metallic AN-FO is a mixture of AN-FO with Al and Fe-Si powder, and the underwater explosive is made from conventional explosive and AN-FO; they are described in another paper. In this study, it is confirmed that the blasting effects of AN-FO explosives are very good.


Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To learn a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary is very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. The simulation study used Old Testament texts and the deep learning package Keras based on Theano.
After preprocessing, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and outputs consisting of the following, 21st character. In total, 1,023,411 input-output vector pairs were included in the dataset, divided into training, validation, and test sets in the proportion 70:15:15. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm; the stochastic gradient algorithm also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, but its validation loss and perplexity were not significantly improved and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for processing Korean in language processing and speech recognition, which are the basis of artificial intelligence systems.
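The input-output construction described above (20 consecutive characters predicting the 21st) can be sketched as a sliding window; the sample text below is illustrative, not the paper's corpus:

```python
def make_windows(text, window=20):
    """Build (input, target) pairs: each input is `window`
    consecutive characters and the target is the next one."""
    pairs = []
    for i in range(len(text) - window):
        pairs.append((text[i:i + window], text[i + window]))
    return pairs

text = "In the beginning God created the heaven and the earth."
pairs = make_windows(text)
print(len(pairs))   # one pair per position: len(text) - 20
print(pairs[0])
```

Applied to the full preprocessed corpus, this windowing yields the 1,023,411 pairs reported in the abstract.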

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.111-131
    • /
    • 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with the research on bankruptcy prediction. The few that exist mainly focus on audited firms, because financial data collection is easier for them. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized firms. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining method, the Self-Organizing Map (SOM). SOM is a type of artificial neural network trained by unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. SOM differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is a popular and successful clustering algorithm. In this study, we classify types of financially distressed firms, specifically non-audited firms. For the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all ratios; 14% of the firms fell into it. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that these firms may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group.
In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the same level; it is concluded that these firms were under distress while pursuing business expansion. About 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms, perhaps distressed due to a bad short-term strategic decision or to problems with the firms' entrepreneurs. Approximately 18% of the firms were in this pattern. This study makes both academic and empirical contributions. Academically, non-audited companies, which tend to go bankrupt easily and have unstructured or easily manipulated financial data, are classified by a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared, reliable financial data. Empirically, even though only the financial data of non-audited firms are analyzed, the approach is useful for detecting first-order symptoms of financial distress, enabling bankruptcy prediction and early-warning management. A limitation of this research is that only 100 firms were analyzed, owing to the difficulty of collecting financial data on non-audited firms, which made analysis by category or firm size impossible. Also, non-financial qualitative data are crucial for bankruptcy analysis, so non-financial qualitative factors should be taken into account in a future study. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
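A Self-Organizing Map as described above can be sketched in a few lines of NumPy; the grid size, decay schedule, and synthetic data are illustrative choices, not the paper's settings:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: competitive learning with a Gaussian
    neighborhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1 - t)                 # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5     # shrinking neighborhood radius
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))  # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

# Hypothetical standardized "financial ratios" (100 firms x 10 ratios)
data = np.random.default_rng(1).normal(size=(100, 10))
w = train_som(data)
print(w.shape)  # one prototype vector per grid cell
```

In the study's setting, firms mapping to the same region of the trained grid form one distress pattern.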

Study on PM10, PM2.5 Reduction Effects and Measurement Method of Vegetation Bio-Filters System in Multi-Use Facility (다중이용시설 내 식생바이오필터 시스템의 PM10, PM2.5 저감효과 및 측정방법에 대한 연구)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.48 no.5
    • /
    • pp.80-88
    • /
    • 2020
  • With the issuance of a one-week emergency fine dust reduction measure in March 2019, the public's anxiety about fine dust is growing. To assess the application of air-purifying, plant-based bio-filters to public facilities, this study presented a method for measuring pollutant reduction effects by creating an indoor environment with continuous discharge of particulate pollutants, and conducted basic studies to verify whether the system improves indoor air quality. In this study, conducted in a lecture room in spring, the background concentration was created using mosquito repellent incense as a pollutant one hour before monitoring. Then, according to the schedule, the fine dust reduction capacity was monitored by irrigating for two hours and venting air for one hour. PM10, PM2.5, and temperature-humidity sensors were installed two meters in front of the bio-filter, and velocity probes were installed at the center of the three air vents for time-series monitoring. The average face velocity of the three air vents in the bio-filter was 0.38±0.16 m/s. The total air-conditioning volume was calculated at 776.89±320.16 ㎥/h by applying an air vent area of 0.29 m × 0.65 m after deducting the damper area. With the system in operation, average temperature and average relative humidity were maintained at 21.5-22.3℃ and 63.79-73.6%, respectively, satisfying the temperature and humidity ranges of various conditions in preceding studies. If the rapid rise in relative humidity produced by the system's air-conditioning function is used efficiently, it should be possible to reduce indoor fine dust while maintaining seasonally appropriate relative humidity. The concentration of fine dust increased identically in all cycles before the bio-filter system was operated.
After the system was started, in the cycle 1 blast section (C-1, β=-3.83, β=-2.45), particulate matter (PM10) was lowered by up to 28.8% or 560.3 ㎍/㎥ and fine particulate matter (PM2.5) by up to 28.0% or 350.0 ㎍/㎥. The concentration of fine dust (PM10, PM2.5) was then reduced by up to 32.6% or 647.0 ㎍/㎥ and 32.4% or 401.3 ㎍/㎥, respectively, in the cycle 2 blast section (C-2, β=-5.50, β=-3.30), and by up to 30.8% or 732.7 ㎍/㎥ and 31.0% or 459.3 ㎍/㎥, respectively, in the cycle 3 blast section (C-3, β=-5.48, β=-3.51). By referring to standards and regulations related to the installation of vegetation bio-filters in public facilities, this study provided plans for setting up an objective performance evaluation environment. In doing so, it was possible to create monitoring infrastructure more objective than a regular lecture-room environment and to secure relatively reliable data.
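The reported air-conditioning volume can be sanity-checked from the face velocity and vent geometry; a sketch using the study's published figures (the result differs slightly from the reported 776.89 ㎥/h because the published average velocity is rounded):

```python
def total_air_volume(face_velocity_ms, vent_w_m, vent_h_m, n_vents):
    """Total air volume (m^3/h) = velocity x vent area x vent count x 3600 s/h."""
    return face_velocity_ms * (vent_w_m * vent_h_m) * n_vents * 3600.0

# Values from the study: 0.38 m/s average face velocity,
# 0.29 m x 0.65 m vent area (damper deducted), 3 vents
print(round(total_air_volume(0.38, 0.29, 0.65, 3), 1))
```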

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.97-117
    • /
    • 2020
  • Product evaluation criteria are indicators describing attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect consumers' differing opinions from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, they still produce criteria irrelevant to the products, because extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model, which extracts candidate criteria words from online reviews using LDA and refines them with k-nearest neighbor. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories. Review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, words per topic are obtained with LDA, and only the nouns among the topic words are chosen as potential criteria words. Then the words are tagged by their suitability as criteria for each middle-level category. Next, every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest neighbor case-based approach is used to classify each word with the tags.
After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts with crawling the reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from 11st, one of the biggest e-commerce companies in Korea. Review data were from the 'Electronics/Digital' section, one of the high-level categories on 11st. For performance evaluation, three other models were compared with the suggested model: the actual criteria of 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation predicted the evaluation criteria of 10 low-level categories with the suggested model and the 3 models above. Criteria words extracted from each model were combined into a single word set, which was used for survey questionnaires. In the survey, respondents chose every item they considered an appropriate criterion for each category. Each model scored when a chosen word had been extracted by that model. The suggested model scored higher than the other models in 8 of the 10 low-level categories. Paired t-tests on the models' scores confirmed that the suggested model performs better in 26 of the 30 tests. In addition, the suggested model was the best in terms of accuracy.
This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by the k-nearest neighbor approach. The method overcomes the limits of previous dictionary-based models and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
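The refinement step, classifying each candidate word by the tags of its nearest tagged neighbors in embedding space, can be sketched as follows; the toy 2-D "embeddings" and tags are illustrative stand-ins for the pre-trained word embedding model:

```python
import numpy as np

def knn_tag(query_vec, vectors, tags, k=3):
    """Tag a candidate word by majority vote among its k nearest
    tagged neighbors, using cosine similarity in embedding space."""
    vecs = np.asarray(vectors, float)
    q = np.asarray(query_vec, float)
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    nearest = np.argsort(sims)[::-1][:k]   # indices of the k most similar words
    votes = [tags[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy embeddings: criteria-like words cluster together
vectors = [[1.0, 0.1], [0.9, 0.2], [0.8, 0.0],   # tagged "criteria"
           [0.1, 1.0], [0.0, 0.9]]               # tagged "not-criteria"
tags = ["criteria", "criteria", "criteria", "not-criteria", "not-criteria"]
print(knn_tag([0.95, 0.05], vectors, tags))
```

A candidate landing near the tagged criteria cluster inherits the "criteria" tag and survives refinement; others are filtered out.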

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision-making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the specific-product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific and proper information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as optimized parameters in further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity. The market size of the extracted products, as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods and methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, it has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
Also, the product group clustering could be replaced by other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
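The grouping-and-summing step, collecting product names above a cosine similarity threshold to a seed term and summing their sales, can be sketched as below; the 2-D embeddings, product records, and threshold are illustrative, not the trained 300-dimensional model:

```python
import numpy as np

def estimate_market_size(seed, embeddings, sales, threshold=0.8):
    """Sum the sales of all products whose embedding has cosine
    similarity >= threshold with the seed product's embedding."""
    seed_vec = embeddings[seed]
    members, total = [], 0.0
    for name, vec in embeddings.items():
        sim = np.dot(seed_vec, vec) / (np.linalg.norm(seed_vec) * np.linalg.norm(vec))
        if sim >= threshold:           # product belongs to the seed's group
            members.append(name)
            total += sales.get(name, 0.0)
    return members, total

# Toy 2-D "embeddings" and company sales (arbitrary currency units)
embeddings = {"laptop": np.array([1.0, 0.1]),
              "notebook PC": np.array([0.9, 0.2]),
              "refrigerator": np.array([0.0, 1.0])}
sales = {"laptop": 500.0, "notebook PC": 300.0, "refrigerator": 900.0}
members, size = estimate_market_size("laptop", embeddings, sales)
print(members, size)
```

Raising or lowering `threshold` widens or narrows the product group, which is how the level of the market category is adjusted in the proposed approach.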

Development of Korean Version of Heparin-Coated Shunt (헤파린 표면처리된 국산화 혈관우회도관의 개발)

  • Sun, Kyung;Park, Ki-Dong;Baik, Kwang-Je;Lee, Hye-Won;Choi, Jong-Won;Kim, Seung-Chol;Kim, Taik-Jin;Lee, Seung-Yeol;Kim, Kwang-Taek;Kim, Hyoung-Mook;Lee, In-Sung
    • Journal of Chest Surgery
    • /
    • v.32 no.2
    • /
    • pp.97-107
    • /
    • 1999
  • Background: This study was designed to develop a Korean version of the heparin-coated vascular bypass shunt using a physical dispersing technique. The safety and effectiveness of the thrombo-resistant shunt were tested in experimental animals. Material and Method: A bypass shunt model was constructed on the descending thoracic aorta of 21 adult mongrel dogs (17.5-25 kg). The animals were divided into groups of no treatment (CONTROL group; n=3), no treatment with systemic heparinization (HEPARIN group; n=6), Gott heparin shunt (GOTT group; n=6), or Korean heparin shunt (KIST group; n=6). Parameters observed were complete blood cell counts, coagulation profiles, kidney and liver function (BUN/Cr and AST/ALT), and surface scanning electron microscope (SSEM) findings. Blood was sampled from the aortic blood distal to the shunt and was compared before the bypass and at 2 hours after the bypass. Result: There were no differences between the groups before the bypass. At bypass 2 hours, the platelet level increased in the HEPARIN and GOTT groups (p<0.05), but there were no differences between the groups. Changes in other blood cell counts were insignificant between the groups. Activated clotting time, activated partial thromboplastin time, and thrombin time were prolonged in the HEPARIN group (p<0.05), and the differences between the groups were significant (p<0.005). Prothrombin time increased in the GOTT group (p<0.05) without any differences between the groups. Changes in fibrinogen level were insignificant between the groups. Antithrombin III levels increased in the HEPARIN and KIST groups (p<0.05), and the inter-group differences were also significant (p<0.05). The protein C level decreased in the HEPARIN group (p<0.05) without any differences between the groups. BUN levels increased in all groups, especially in the HEPARIN and KIST groups (p<0.05), but there were no differences between the groups.
Changes in Cr, AST, and ALT levels were insignificant between the groups. SSEM findings revealed severe aggregation of platelets and other cellular elements in the CONTROL group, and the HEPARIN group showed more adherence of cellular elements than the GOTT or KIST group. Conclusion: The above results show that heparin-coated bypass shunts (either GOTT or KIST) can suppress thrombus formation on the surface without inducing bleeding tendencies, while systemic heparinization (HEPARIN) may not block activation of the coagulation system on the surface in contact with foreign materials but increases bleeding tendencies. We also conclude that the thrombo-resistant effects of the Korean version of the heparin shunt (KIST) are similar to those of the commercialized heparin shunt (GOTT).


Study on the Tractive Characteristics of the Seed Furrow Opener for No-till Planter (무경운(無耕耘) 파종기용(播種機用) 구체기(溝切器)의 견인특성(牽引特性)에 관(關)한 연구(硏究))

  • La, Woo-Jung
    • Korean Journal of Agricultural Science
    • /
    • v.5 no.2
    • /
    • pp.149-157
    • /
    • 1978
  • This study was carried out to obtain basic data for selecting the type of furrow opener for a no-tillage soybean planter trailed by a two-wheel tractor, from the standpoint of minimum draft and good furrowing performance. Two models of furrow opener, hoe type and disc type, were tested on artificial soil packed in a moving soil bin. The results were as follows. For the disc furrow opener, drafts were measured for various disc diameters under disc angles of 8° and 16°, working depths of 3 cm and 6 cm, and a working speed of 2.75 cm/sec. The minimum draft appeared at a disc diameter of about 28 cm, and the draft increased as the diameter became larger or smaller than this. Specific draft showed almost the same tendency, but reached its minimum at a diameter of about 30 cm. To control seeding depth, the relationship between draft and working depths of 3 cm and 6 cm was tested. Draft varied linearly with working depth and was affected more by depth than by the other factors. At both working depths there was a general tendency, regardless of disc angle and working speed, for total draft to be minimal at a disc diameter of about 28 cm and for specific draft to be minimal at about 30 cm. To control working width and speed, the relationships among draft, disc angle, and working speed were investigated; draft increased as angle and speed increased, and was affected more by speed than by angle. To compare the hoe-type with the disc-type opener, the specific drafts of hoe openers were compared with those of a disc opener with a 16° angle and 30 cm diameter.
The specific draft of the disc-type opener with a 30 cm diameter was 0.35~0.5 kg/cm², while that of the hoe type with a lift angle of 20° was 0.71~1.02 kg/cm², about twice that of the disc type on average. The furrows opened by disc openers were also cleaner than those opened by hoe openers.
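The "about twice" comparison can be checked from the reported ranges; a sketch using the midpoints of the published specific-draft intervals:

```python
def midpoint(lo, hi):
    """Midpoint of a reported range."""
    return (lo + hi) / 2.0

# Reported specific-draft ranges (kg/cm^2)
hoe = midpoint(0.71, 1.02)    # hoe type, 20 deg lift angle
disc = midpoint(0.35, 0.50)   # disc type, 30 cm diameter, 16 deg angle
print(round(hoe / disc, 2))   # ratio of average specific drafts
```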
