• Title/Summary/Keyword: 인공 결함 (artificial defect)


Studies on the Life History of Bacciger harengulae (Bacciger harengulae의 생활사에 관한 연구)

  • KIM Young-Gill;CHUN Seh-Kyu
    • Korean Journal of Fisheries and Aquatic Sciences / v.17 no.5 / pp.449-470 / 1984
  • The cercaria of Bacciger harengulae, which parasitizes the gonad of Solen strictus, was investigated in order to reveal its entire life history. The study area was the sea in the vicinity of Naechodo, in the estuary of the Kum River on the western coast of Korea, during 1980-1983. The morphology, development, and infection rates of sporocysts and cercariae within Solen strictus were examined. To accomplish the objectives of this study, an artificial infection experiment and investigations of the second intermediate host, the final host, and the growing stages were also carried out both in the laboratory and in the natural habitat of Solen strictus. The study revealed that the first intermediate hosts were Meretrix lusoria, Solen strictus, Tapes japonica, and Laternula limicola; the second intermediate host was Palaemon (Exopalaemon) carinicauda; and the final hosts were Konosirus punctatus and Harengula zunasi. A mature sporocyst found in the gonad of Solen strictus was 4.0-4.3 × 0.2-0.21 mm in size, and the cercaria, with 27 pairs of setae, each seta consisting of 6 tufts, was 270 × 147 μm in body size and 550 × 52 μm in tail size. The oral sucker (52 × 42 μm), pharynx, ventral sucker, and two testes were clearly visible within the cercaria. The excretory vesicle of the cercaria was V-shaped, and the flame cell formula was expressed as 2[(3+3)+(3+3)] = 24. Infection with cercariae in the first intermediate host, Solen strictus, was found throughout the year regardless of water temperature, and the mean infection rate was 9.67% during the study period. The infection rate fluctuated with temperature, the highest being 28.0% at a water temperature of 28.0°C in July and the lowest 2.4% at 19.5°C in October, and it increased in proportion to the shell length of the host; however, cercariae were not detected in hosts below 4.0 cm in shell length. Mature cercariae were found during the 6 months from May to October, when the water temperature was above 19.5°C. On the other hand, when the water temperature was below 19.5°C, only immature cercariae and sporocysts were found. The cercariae were active for 35 hours and survived for 71 hours at 20°C, and for 29 and 34 hours respectively at 25°C, whereas the cercariae were inactive at water temperatures below 20°C. Cercariae released from Solen strictus approached shrimp of 1-3 cm in body length as the second host and began to intrude into the muscle of the shrimp after 2-3 hours. The infected cercaria formed a cyst after 7-8 hours and became a mature metacercaria, 420 × 310 μm in size, 15 days after infection. The infection rate of metacercariae in shrimp in the laboratory was highest at 25°C, being 61%, and was 17% at 20°C. The infection rate of metacercariae in shrimp was highest in the first abdominal segment, followed by the cephalothorax and the second and fifth abdominal segments, in that order. The infection rate of metacercariae in wild shrimp was high, 9.6-11.1%, at 26.5°C in June, and low, 1.56-2.5%, at 28-29.5°C from July to August. Shrimp infected with metacercariae were experimentally fed to Konosirus punctatus in the laboratory in order to identify the final host. The metacercariae developed into adult worms, 440-520 × 310-360 μm in size, within the intestine of Konosirus punctatus 20 days after infection.
The adult worm was oval in shape, and its eggs were 20-24 × 11-20 μm in size. The infection rate of adult worms in Konosirus punctatus and Harengula zunasi ranged from 87.3 to 100%, with a mean of 95.2%, regardless of the body length of the hosts. The infection rate was 100% in June and July but decreased in September and October. The size and body structure of the trematode observed in the present study agreed well with those investigated by Yamaguti (1938); thus, it may be concluded that the adult worm is identified as Bacciger harengulae.


A Study on Forest Insurance (산림보험(山林保險)에 관한 연구(硏究))

  • Park, Tai Sik
    • Journal of Korean Society of Forest Science / v.15 no.1 / pp.1-38 / 1972
  • 1. Objective of the Study: The objective of the study was to make fundamental suggestions for drawing up a forest insurance system applicable to Korea by investigating forest insurance systems undertaken in foreign countries, analyzing the forest hazards that occurred throughout the forests of Korea in the past, and hearing the opinions of people engaged in forestry. 2. Methods of the Study: First, reference studies on insurance at large as well as on forest insurance were intensively made to draw out the characteristics of forest insurance practiced in the main forestry countries. Second, investigations of forest hazards in Korea over the past ten years were made with the help of the Office of Forestry. Third, questionnaires concerning forest insurance were prepared and delivered at random to 533 personnel working at administrative offices of forestry, forest stations, forest cooperatives, colleges and universities, research institutes, and fire insurance companies. Fourth, fifty-three representative forest owners in an area of three forest types (coniferous, hardwood, and mixed forest) in a representative region of Kyonggi Province, one of fourteen collective forest development program areas in Korea, were directly interviewed by the writer. 3. Results of the Study: The rate of response to the questionnaire was 74.40%, as shown in Table 3, and the results of the questionnaire were as follows (percentages in parentheses show the rates of response; totals short of 100% are due to the exclusion of minor responses): 1) Necessity of forest insurance: The respondents expressed the opinion that forest insurance must be undertaken to assure forest financing (5.65%), to receive reimbursement of replanting costs in case of damage (35.87%), and to protect silvicultural investments (46.74%). 2) Law of forest insurance: Few respondents were in favor of applying the general insurance regulations to forest insurance practice (9.35%), and the majority were in favor of passing a special forest insurance law in light of forest characteristics (88.26%). 3) Sorts of institutes to undertake forest insurance: A few respondents believed that insurance companies at large could take care of forest insurance (17.42%); some thought forest owners' mutual associations would manage forest insurance more effectively (23.53%); but more than half were in favor of establishing public or national forest insurance institutes (56.18%). 4) Kinds of risks to be undertaken in forest insurance: The respondents held that the risks undertaken in forest insurance should be limited to forest fire hazards only (23.38%); to forest fire hazards plus weather damage (14.32%); or to forest fire hazards, weather damage, and insect damage (60.68%). 5) Objects to be insured: It was responded that the objects included in forest insurance should be limited (1) to artificial coniferous forests only (13.47%), or (2) to both coniferous and broad-leaved artificial forests (23.74%); (3) but more than half of the respondents desired that all forests, regardless of species and method of establishment, be insured (61.64%).
6) Range of tree ages to be included in forest insurance: Some respondents felt it would be enough to insure trees less than ten years of age (15.23%); others found it more desirable to take up forest trees under twenty years of age (32.95%); nevertheless, a large number of respondents were in favor of underwriting all forest trees less than forty years of age (46.37%). 7) Term of a forest insurance contract: Quite a few respondents favored a contract made on a one-year basis (31.74%), but more than half favored a contract made on a five-year basis (58.68%). 8) Limitation in a forest insurance contract: The respondents indicated that it would be desirable in a forest insurance contract to exclude forests of less than five hectares (20.78%), but more than half expressed the opinion that forests above a minimum volume or number of trees per unit area should be included in a forest insurance contract regardless of the area of the forest land (63.77%). 9) Methods of contract: Some responded that it would be good to let forest owners choose which of their forests to insure (32.13%); others inclined to think it desirable to include all the forests an owner holds whenever he decides to make a forest insurance contract (33.48%); the rest responded in favor of requiring owners to buy an insurance policy if they own forests established with subsidy or hold highly valuable growing stock (31.92%). 10) Rate of premium: The responses were divided into three categories: (1) the rate of premium is to be decided according to the regional degree of risk (27.72%); (2) by taking into consideration both the regional degree of risk and the insurable value (31.59%); (3) or according to the rate of risk for the entire country and the insurable value (39.55%). 11) Payment of premium: Although a few respondents wished to make a single payment of premium for a short-term forest insurance contract and annual payments for a long-term contract (13.80%), the majority wished to pay the premium annually regardless of the term of the contract, with a high premium rate on short-term contracts but a low rate on long-term contracts (83.71%). 12) Institutes in charge of forest insurance business: A few respondents desired that forest insurance be handled at government forest administrative offices (18.75%); others at insurance companies (35.76%); but the largest number favored the county forest associations, and wanted a certain rate of the premium to be paid to the forest associations that issue the insurance (44.22%). 13) Limitation on indemnity for damages: On the limitation of indemnity, the respondents showed quite different views. Some desired compensation to cover replanting costs when young stands suffer damage, with payment at the rate of eighty percent of the losses incurred when matured timber stands suffer damage (29.70%); others desired compensation for the actual total loss valued at present market prices (31.07%); the rest responded in favor of compensation at the present value figured by applying a certain rate of prolongation factors to the establishment costs (36.99%).
14) Raising of funds for forest insurance: A few respondents hoped to raise the fund for forest insurance by setting aside a certain amount of money from the indemnities paid (15.65%); others wished to raise the fund by levying new forest land taxes (33.79%); but the rest hoped to raise the fund by reserving a certain amount from the surplus saved in non-risk years (44.81%). 15) Causes of fires: The main causes of forest fires figured out from the respondents' experience turned out to be (1) accidental fires, (2) cigarettes, and (3) shifting cultivation. The responses coincided with the forest fire analysis made by the Office of Forestry. 16) Fire prevention: The respondents suggested that the three most important and practical forest fire prevention measures would be (1) providing fire-breaks, (2) keeping passers-by out during drought seasons, and (3) enlightenment through mass communication systems. 4. Suggestions: The writer wishes to present some suggestions that seem helpful in drawing up a forest insurance system, based on the findings of the questionnaire analysis and the results of investigations on forest insurance undertaken in foreign countries. 1) A forest insurance system designed to compensate losses figured on the basis of replanting costs when young forest stands suffer damage, and to strengthen credit ratings by relieving the risk of damage, must be put into practice as soon as possible with the enactment of a specifically drawn forest insurance law, and a committee on forest insurance should be organized to make a full study of the forest insurance system. 2) Two kinds of organizations furnishing forest insurance, publicly owned and privately owned, are desirable in order to handle forest risks properly. The privately owned forest insurance organizations should take up forest fire insurance only, and the publicly owned ought to write insurance for forest fires and insect damage. 3) The privately owned organizations should take up all forest stands older than twenty years, whereas the publicly owned should sell forest insurance on artificially planted stands younger than twenty years, with emphasis on compensating the replanting costs of forest stands when they suffer damage. 4) Small forest stands of less than one hectare, or stands holding less volume or stocking than the standard per unit area, are not to be included in forest insurance writing; the minimum term of insuring should be no longer than one year in the privately owned organizations, although the insuring period could be extended beyond one year, whereas a consecutive five-year term should be set as the minimum period of insuring forests in the publicly owned organizations. 5) Forest owners should be free to select which forests to insure, whereas owners of stands established with subsidy should be required to insure their forests with the publicly owned forest insurance organizations. 6) Annual insurance premiums for both publicly owned and privately owned organizations ought to be figured in proportion to the amount of insurance, in accordance with the degree of risk, grouped into three categories on the basis of the rate of risk throughout the country.
7) The annual premium should be paid at the beginning of the forest insurance contract, but a reduction must be made if the insuring period extends beyond the minimum period of forest insurance set by law. 8) The compensation for damages, the reimbursement, should be figured on the basis of the ratio between the amount of insurance and the insurable value. In the publicly owned forest insurance system, the standard amount of insurance should be set on the basis of establishment costs in order to prevent over-compensation. 9) Forest insurance business is to be handled at the windows of insurance companies when forest owners buy privately owned forest insurance, but the business of writing publicly owned forest insurance should be done through the forest cooperatives, with certain portions of the premium reimbursed to the forest cooperatives. 10) Forest insurance funds ought to be reserved by levying a property tax on forest lands. 11) In order to prevent forest damage, forest owners should be required to report forest hazards immediately to the forest insurance organizations, and the latter should bear the responsibility of taking preventive measures.


Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but such models cannot efficiently capture the correlation between input units, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). In order to learn a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We constructed language models using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras based on Theano. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model.
Although there were slight differences between the models in the completeness of the generated sentences, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect in the grammatical sense. The results of this study are expected to be widely used for the processing of Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
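
To make the setup concrete, below is a minimal sketch of a phoneme-level LSTM language model of the kind the abstract describes, written with the Keras API (the paper used Keras on a Theano backend). The vocabulary size (74) and input length (20) come from the abstract; the embedding size, layer width, and sampling routine are illustrative assumptions.

```python
# Sketch of a character(phoneme)-level LSTM language model: predict the
# 21st character from the preceding 20, as described in the abstract.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Embedding

VOCAB_SIZE = 74   # unique characters after preprocessing (from the abstract)
SEQ_LEN = 20      # the model sees 20 consecutive characters
HIDDEN = 256      # hypothetical layer width; not reported in the abstract

model = Sequential([
    Embedding(VOCAB_SIZE, 64),                # 64-dim embedding is an assumption
    LSTM(HIDDEN, return_sequences=True),
    LSTM(HIDDEN, return_sequences=True),
    LSTM(HIDDEN),                             # three LSTM layers; the paper also tests four
    Dense(VOCAB_SIZE, activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# x_train: (n, 20) integer-coded characters; y_train: (n,) the 21st character
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)

def generate(seed_ids, n_chars, temperature=1.0):
    """Sample characters one at a time, feeding each prediction back in."""
    ids = list(seed_ids)
    for _ in range(n_chars):
        x = np.array([ids[-SEQ_LEN:]])
        probs = model.predict(x, verbose=0)[0]
        probs = np.exp(np.log(probs + 1e-9) / temperature)
        probs /= probs.sum()
        ids.append(int(np.random.choice(VOCAB_SIZE, p=probs)))
    return ids
```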

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, achieved a huge victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in the image recognition field. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance using existing machine learning techniques. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in this paper are the telemarketing response data of a bank in Portugal. The data have input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, the traditional artificial neural network model. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output channels (filters), and the application conditions of the dropout technique. The F1 score was used to evaluate how well the models classify the class of interest, instead of overall accuracy. The detailed methods for applying each deep learning technique in the experiment were as follows. The CNN algorithm reads adjacent values around a specific position and recognizes local features, but the distance between business data fields carries no such meaning because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the overall characteristics of the data are learned at once, and added a hidden layer to make decisions based on the extracted features; a sketch of this configuration is given after this abstract. For the model having two LSTM layers, the input direction of the second layer was reversed with respect to the first layer in order to reduce the influence of the position of each field.
In the case of the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best model was the MLP model with two hidden layers using dropout. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because the CNN performed well on a binary classification problem, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
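
As an illustration of the configuration that scored best, the sketch below builds a 1-D CNN whose single filter window spans all input fields at once, followed by dropout with p = 0.5 as stated in the abstract. The number of fields, filters, and hidden units are hypothetical placeholders, not the authors' values.

```python
# Hedged sketch of a CNN-with-dropout binary classifier over tabular fields.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv1D, Flatten, Dense, Dropout

N_FIELDS = 16   # hypothetical number of input variables after encoding
N_FILTERS = 32  # hypothetical number of convolution filters

model = Sequential([
    Input(shape=(N_FIELDS, 1)),              # each record as a length-N_FIELDS "sequence"
    Conv1D(N_FILTERS, kernel_size=N_FIELDS,  # the filter covers every field at once
           activation="relu"),
    Flatten(),
    Dropout(0.5),                            # dropout probability from the abstract
    Dense(32, activation="relu"),            # extra hidden layer for the final decision
    Dropout(0.5),
    Dense(1, activation="sigmoid"),          # binary target: opened an account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# F1 is computed on held-out predictions rather than used as the training loss:
# from sklearn.metrics import f1_score
# f1 = f1_score(y_test, (model.predict(x_test) > 0.5).astype(int))
```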

THE EFFECTS OF THERMAL STIMULI TO THE FILLED TOOTH STRUCTURE (온도자극이 충전된 치질에 미치는 영향)

  • Baik, Byeong-Ju;Roh, Yong-Kwan;Lee, Young-Su;Yang, Jeong-Suk;Kim, Jae-Gon
    • Journal of the Korean Academy of Pediatric Dentistry / v.26 no.2 / pp.339-349 / 1999
  • Tooth structure replaced by restorative materials may produce discomfort under hot or cold stimuli. To investigate the effects of these stimuli on human teeth, a thermal analysis was carried out by solving the general heat conduction equation in a modeled tooth using a numerical method. The method was applied to axisymmetric and two-dimensional models, analyzing the effects of constant temperatures of 4°C and 60°C. The thermal shock was applied for 2 seconds or 4 seconds, after which the surface recovered to the ambient condition of 20°C up to 10 seconds. The thermal behavior of a tooth covered with a gold or stainless steel crown was compared with that of a tooth without a crown. At the same time, the effects of restorative materials (amalgam, gold, and zinc oxide-eugenol (ZOE)) on the temperature at the PDJ (pulpo-dentinal junction) were studied. The geometry used for thermal analysis had so far been limited to two-dimensional and axisymmetric tooth models, but a typical restored tooth contains a cross-shaped cavity that is neither two-dimensional nor axisymmetric. Therefore, in this study, a three-dimensional model was developed to investigate the effect of the shape and size of the cavity. This three-dimensional model may be used in further research to investigate the effects of restorative materials and cavity design on the thermal behavior of a realistically shaped tooth. The results were as follows: 1. When a cold temperature of 4°C was applied to the surface of teeth restored with amalgam for 2 seconds, with recovery to the ambient temperature of 20°C, the PDJ temperature decreased rapidly to 29°C within 3 seconds and reached 25°C after 9 seconds. With a stainless steel crown this temperature decreased more slowly, but remained within 1°C of the uncrowned case. Using gold as the restorative material, the PDJ temperature decreased very quickly, owing to its high thermal conductivity, and reached nearly 25°C, but the temperature after 9 seconds was similar to that in teeth without a crown. The effect of the cold could be attenuated with ZOE placed under the cavity: its low thermal conductivity delayed the temperature decrease and kept the PDJ 4°C warmer than the other conditions after 9 seconds. 2. When the duration of the cold stimulus was increased to 4 seconds, with recovery to 20°C from 4 to 9 seconds, the temperature after 9 seconds was about 2-3°C lower than under the 2-second stimulus; in the case of the gold restoration, however, the high thermal conductivity of gold produced a minimum temperature of 21°C after 5 seconds, rewarming to 23°C after 9 seconds. 3. The effects of a hot stimulus were also investigated at a temperature of 60°C. For a 2-second stimulus, in teeth without a crown the PDJ temperature increased from the initial 35°C to 40°C after 3 seconds and decreased to 30°C after 9 seconds. This temperature was sensitive to the surface temperature in teeth with a gold restoration: it increased rapidly from the initial 35°C to 41°C after 2 seconds and decreased to 28°C after 9 seconds, a 13°C variation over 9 seconds following the surface temperature.
With ZOE at the bottom of the cavity, the temperature variation was only in the range of 5°C, with a maximum temperature of 37°C after 3 seconds of stimulus.
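
For readers unfamiliar with the numerical method mentioned above, the sketch below solves the 2-D transient heat conduction equation dT/dt = α∇²T with an explicit finite-difference scheme on a rectangular slab, applying a 2-second 4°C surface shock followed by recovery to 20°C, as in the first result. The grid, the diffusivity value, and the boundary layout are illustrative assumptions, not the paper's tooth model.

```python
# Explicit FTCS finite-difference solution of 2-D transient heat conduction,
# a stand-in for the kind of calculation the study performs on a tooth model.
import numpy as np

NX, NY = 50, 50                 # grid points (hypothetical mesh)
DX = 1e-4                       # 0.1 mm spacing (assumed)
ALPHA = 2.3e-7                  # m^2/s, rough diffusivity of dentin (assumed)
DT = 0.05 * DX**2 / ALPHA       # time step well inside the 2-D stability limit

T = np.full((NX, NY), 35.0)     # body temperature everywhere initially

def step(T, surface_temp):
    """One explicit update; the first row stands in for the stimulated surface."""
    Tn = T.copy()
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / DX**2
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + ALPHA * DT * lap
    Tn[0, :] = surface_temp     # Dirichlet condition at the tooth surface
    Tn[-1, :] = 35.0            # pulp side held at body temperature
    return Tn

t = 0.0
while t < 10.0:                 # simulate 10 s as in the abstract
    surface = 4.0 if t < 2.0 else 20.0   # 2-s cold shock, then ambient recovery
    T = step(T, surface)
    t += DT

print("temperature at mid-depth (stand-in for the PDJ): %.1f C" % T[NX // 2, NY // 2])
```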


Weaning Following a 60 Minutes Spontaneous Breathing Trial (1시간 자가호흡관찰에 의한 기계적 호흡치료로부터의 이탈)

  • Park, Keon-Uk;Won, Kyoung-Sook;Koh, Young-Min;Baik, Jae-Jung;Chung, Yeon-Tae
    • Tuberculosis and Respiratory Diseases / v.42 no.3 / pp.361-369 / 1995
  • Background: A number of different weaning techniques can be employed, such as the spontaneous breathing trial, intermittent mandatory ventilation (IMV), or pressure support ventilation (PSV); however, conclusive data indicating the superiority of one technique over another have not been published. Usually, a conventional spontaneous breathing trial is undertaken by supplying humidified O2 through a T-shaped adaptor connected to the endotracheal or tracheostomy tube. In Korea, the T-tube trial is not popular because a high-flow oxygen system is not always available. Also, the timing of extubation is not conclusively established and depends on clinical experience. It is known that withdrawing the endotracheal tube after weaning is far better than leaving it in place for any further period, since the tube produces varying degrees of resistance depending on its internal diameter and the flow rates encountered. The purpose of the present study was to evaluate the effectiveness of weaning and extubation following a 60-minute spontaneous breathing trial with simple oxygen supply through the endotracheal tube. Methods: We analyzed the results of weaning and extubation following a 60-minute spontaneous breathing trial with simple oxygen supply through the endotracheal tube in 18 subjects, from June 1993 to June 1994. They consisted of 9 males and 9 females. The duration of mechanical ventilation ranged from 38 to 341 hours (mean: 105.9 ± 83.4 hours). In all cases, the cause of ventilator dependency was identified and precipitating factors were corrected. The weaning trial was done when the patient became alert and the arterial O2 tension was adequate (PaO2 > 55 mmHg) with an inspired oxygen fraction of 40%. We conducted a careful physical examination while the patient was breathing spontaneously through the endotracheal tube. Failure of the weaning trial was signaled by cyanosis, sweating, paradoxical respiration, or intercostal recession. Weaning failure was defined as the need for mechanical ventilation within 48 hours. Results: In 19 weaning trials of 18 patients, successful weaning and extubation were possible in 16/19 (84.2%). During the 60-minute trial of spontaneous breathing through the endotracheal tube, the patients who weaned successfully developed a slight increase in respiratory rate, but no significant changes in arterial blood gas values were noted; the patients who failed the weaning trial showed a marked increase in respiratory rate without significant changes in arterial blood gas values. Conclusion: The results of the present study indicate that weaning from mechanical ventilation following 60 minutes of spontaneous breathing with O2 supplied through the endotracheal tube is a simple and effective method. Extubation can be done at the time of successful weaning, except when endobronchial toilet or airway protection is required.


Clinical Analysis of Repeated Heart Valve Replacement (심장판막치환술 후 재치환술에 관한 임상연구)

  • Kim, Hyuck;Nam, Seung-Hyuk;Kang, Jeong-Ho;Kim, Young-Hak;Lee, Chul-Burm;Chon, Soon-Ho;Shinn, Sung-Ho;Chung, Won-Sang
    • Journal of Chest Surgery / v.40 no.12 / pp.817-824 / 2007
  • Background: There are two choices for heart valve replacement: a tissue valve or a mechanical valve. With a tissue valve, additional surgery may become necessary due to valve degeneration. If the risk of additional surgery could be reduced, the tissue valve could be more widely used. Therefore, we analyzed the risk factors and mortality of patients undergoing repeated heart valve replacement and primary replacement. Material and Method: We analyzed 25 consecutive patients who underwent repeated heart valve replacement and 158 patients who underwent primary heart valve replacement, among 239 patients who underwent heart valve replacement in our hospital from January 1995 to December 2004. Result: There were no differences in age, sex, or preoperative ejection fraction between the repeated and primary valve replacement groups. In the repeated valve replacement group, the previously implanted artificial valves were 3 mechanical valves and 23 tissue valves; one of these cases had simultaneous replacement of the tricuspid and aortic valves with tissue valves. The mean interval since the previous operation was 92 months for mechanical valves and 160 months for tissue valves. The mean cardiopulmonary bypass time and aortic cross-clamp time were 152 minutes and 108 minutes, respectively, in the repeated valve replacement group, and 130 minutes and 89 minutes, respectively, in the primary valve replacement group; these differences were statistically significant. An intra-aortic balloon pump (IABP) was required in 2 cases (8%) in the repeated valve replacement group and in 6 cases (3.8%) in the primary valve replacement group. Operative death occurred in one case (4%) in the repeated valve replacement group and in nine cases (5.1%) in the primary valve replacement group. Among postoperative complications, the need for mechanical ventilation over 48 hours differed between the two groups. The mean follow-up period after surgery was 6.5 ± 3.2 years. The 5-year survival was 74% in the repeated valve replacement group and 95% in the primary valve replacement group. Conclusion: The risk was slightly increased, but there was little difference in mortality between the repeated and primary heart valve replacement groups. Therefore, it is necessary to reconsider avoiding the tissue valve because of the risk of additional surgery, and selective use of the tissue valve, which has several advantages over the mechanical valve, is encouraged. In cases of repeated replacement, however, the mortality rate was high for patients whose preoperative status was poor. A proper assessment of cardiac function and patient status is required after the primary valve replacement; a secondary replacement could then be considered.

Gd(DTPA)²⁻-enhanced, and Quantitative MR Imaging in Articular Cartilage (관절연골의 $Gd(DTPA)^{2-}$-조영증강 및 정량적 자기공명영상에 대한 실험적 연구)

  • Eun Choong-Ki;Lee Yeong-Joon;Park Auh-Whan;Park Yeong-Mi;Bae Jae-Ik;Ryu Ji Hwa;Baik Dae-Il;Jung Soo-Jin;Lee Seon-Joo
    • Investigative Magnetic Resonance Imaging / v.8 no.2 / pp.100-108 / 2004
  • Purpose: Early degeneration of articular cartilage is accompanied by a loss of glycosaminoglycan (GAG) and a consequent change in the integrity of the tissue. The purpose of this study was to biochemically quantify the loss of GAG and to evaluate Gd(DTPA)²⁻-enhanced imaging and T1, T2, and rho relaxation maps for the detection of early cartilage degeneration. Materials and Methods: A cartilage-bone block 8 mm × 10 mm in size was acquired from the patella of each of three pigs. Quantitative analysis of cartilage GAG was performed by spectrophotometry using dimethylmethylene blue. Each cartilage block was cultured in one of three different media: two culture media (0.2 mg/ml trypsin solution, and 1 mM Gd(DTPA)²⁻ mixed with trypsin solution) and a control medium (phosphate-buffered saline, PBS). The cartilage blocks were cultured for 5 hours, during which MR images of the blocks were obtained at one-hour intervals (0, 1, 2, 3, 4, and 5 hr); additional culture was then done for 24 and 48 hours. Both a T1-weighted image (TR/TE, 450/22 ms) and a mixed-echo sequence (TR/TE, 760/21-168 ms; 8 echoes) were obtained at all time points using a field of view of 50 mm, a slice thickness of 2 mm, and a 256 × 512 matrix. The MRI data were analyzed with pixel-by-pixel comparisons. The cultured cartilage-bone blocks were examined microscopically using hematoxylin & eosin, toluidine blue, alcian blue, and trichrome stains. Results: On quantitative analysis, the GAG concentration in the culture solutions was proportional to the culture duration. The T1 signal of the cartilage-bone block cultured in the Gd(DTPA)²⁻-mixed solution was significantly higher (42% on average, p<0.05) than that of the block cultured in the trypsin solution alone. The T1, T2, and rho relaxation times of the cultured tissue were not significantly correlated with culture duration (p>0.05); however, a focal increase in T1 relaxation time in the superficial and transitional layers of cartilage was seen in the Gd(DTPA)²⁻-mixed culture. Toluidine blue and alcian blue stains revealed multiple defects through the whole thickness of the cartilage cultured in the trypsin media. Conclusion: The quantitative analysis showed a gradual loss of GAG proportional to the culture duration. Microimaging of cartilage with Gd(DTPA)²⁻ enhancement and relaxation maps was possible at a pixel size of 97.9 × 195 μm. The loss of GAG over time was better demonstrated with Gd(DTPA)²⁻-enhanced images than with T1, T2, or rho relaxation maps. Therefore, Gd(DTPA)²⁻-enhanced T1-weighted imaging is superior for the detection of early cartilage degeneration.
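
As an illustration of the pixel-by-pixel relaxation mapping described above, the sketch below fits a mono-exponential decay S(TE) = S0·exp(−TE/T2) to an 8-echo series (TE 21-168 ms, as in the abstract) with a log-linear least-squares fit per pixel. The even echo spacing and the array shapes are assumptions, not the study's acquisition details.

```python
# Pixel-wise T2 mapping from a multi-echo series via a log-linear fit.
import numpy as np

def t2_map(echoes, tes):
    """echoes: (n_echoes, ny, nx) magnitude images; tes: echo times in ms."""
    n, ny, nx = echoes.shape
    y = np.log(np.clip(echoes, 1e-6, None)).reshape(n, -1)  # ln S, one column per pixel
    A = np.stack([np.ones(n), -np.asarray(tes, float)], axis=1)
    # Solve ln S = ln S0 - TE / T2 for every pixel at once.
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    with np.errstate(divide="ignore"):
        t2 = 1.0 / coef[1]          # the slope coefficient is 1/T2
    return t2.reshape(ny, nx)

# Example with 8 evenly spaced echoes (assumed spacing):
tes = np.linspace(21, 168, 8)
echoes = np.random.rand(8, 64, 64) + 0.1   # stand-in data; real input is the MR series
print(t2_map(echoes, tes).shape)           # -> (64, 64) T2 values in ms
```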


A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized for practical applications in various industries. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision-making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there is continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product group clustering method could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they will further improve the performance of the basic model conceptually proposed in this study.
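
A minimal sketch of the proposed bottom-up pipeline is given below: train Word2Vec on tokenized product names, derive a product group around a seed term by cosine similarity, and sum the sales of the grouped products. The vector dimension (300) and window size (15) follow the abstract; the toy records, the similarity threshold, and the grouping rule are illustrative assumptions.

```python
# Word2Vec-based product grouping and market size summation (hedged sketch).
from gensim.models import Word2Vec

# product_records: list of (tokenized product name, sales) pairs (toy data)
product_records = [
    (["stainless", "steel", "pipe"], 120.0),
    (["steel", "pipe", "fitting"], 45.0),
    (["cosmetic", "cream"], 300.0),
]

sentences = [tokens for tokens, _ in product_records]
model = Word2Vec(sentences, vector_size=300, window=15, min_count=1, sg=1)

def market_size(seed, threshold=0.6):
    """Sum the sales of products containing a token similar enough to the seed."""
    similar = {w for w, s in model.wv.most_similar(seed, topn=50) if s >= threshold}
    similar.add(seed)
    return sum(sales for tokens, sales in product_records
               if similar & set(tokens))

print(market_size("pipe"))   # estimated size of the hypothetical "pipe" group
```

Raising or lowering the `threshold` argument plays the role described in the abstract: a higher threshold yields narrower, more specific product groups, while a lower one aggregates toward broader market categories.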

THE RELATIONSHIP BETWEEN PARTICLE INJECTION RATE OBSERVED AT GEOSYNCHRONOUS ORBIT AND DST INDEX DURING GEOMAGNETIC STORMS (자기폭풍 기간 중 정지궤도 공간에서의 입자 유입률과 Dst 지수 사이의 상관관계)

  • 문가희;안병호
    • Journal of Astronomy and Space Sciences / v.20 no.2 / pp.109-122 / 2003
  • To examine the causal relationship between geomagnetic storms and substorms, we investigated the correlation between the dispersionless particle injection rate of the proton flux observed by geosynchronous satellites, which is known to be a typical indicator of substorm expansion activity, and the Dst index during magnetic storms. We utilized geomagnetic storms that occurred during the period 1996-2000 and categorized them into three classes in terms of the minimum value of the Dst index (Dst_min): intense (-200 nT ≤ Dst_min ≤ -100 nT), moderate (-100 nT ≤ Dst_min ≤ -50 nT), and small (-50 nT ≤ Dst_min ≤ -30 nT) storms. We used the proton flux in the energy range from 50 keV to 670 keV, the major constituent of the ring current particles, observed by the LANL geosynchronous satellites located within the local time sector from 18:00 MLT to 04:00 MLT. We also examined the flux ratio (f_max/f_ave) to estimate the particle energy injection rate into the inner magnetosphere, with f_ave and f_max being the flux levels at quiet times and at onset, respectively. The total energy injection rate into the inner magnetosphere cannot be estimated from particle measurements by one or two satellites; however, it should be at least proportional to the flux ratio and the injection frequency. Thus we propose a quantity, the "total energy injection parameter" (TEIP), defined as the product of the flux ratio and the injection frequency, as an indicator of the energy injected into the inner magnetosphere. To investigate the phase dependence of the substorm contribution to the development of a magnetic storm, we examined the correlations separately during the main and recovery phases of the storms. Several interesting tendencies were noted, particularly during the main phase. First, the average particle injection frequency tends to increase with storm size, with a correlation coefficient of 0.83. Second, the flux ratio (f_max/f_ave) tends to be higher during large storms; the correlation coefficient between Dst_min and the flux ratio is generally high, for example, 0.74 for the 75-113 keV energy channel. Third, it is also worth mentioning that there is a high correlation between the TEIP and Dst_min, with the highest coefficient (0.80) recorded for the 75-113 keV energy channel, the typical particle energy of the ring current belt. Fourth, particle injection during the recovery phase tends to make storms longer, particularly in the case of intense storms. These characteristics observed during the main phase of magnetic storms indicate that substorm expansion activity is closely associated with the development of magnetic storms.
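
To make the proposed quantity concrete, the sketch below computes the total energy injection parameter, TEIP = (f_max/f_ave) × injection frequency, from a single-satellite proton flux series. The quiet-level estimate and the onset detection rule are illustrative assumptions, not the authors' procedure.

```python
# Hedged sketch of the TEIP defined in the abstract.
import numpy as np

def teip(flux, window_hours, quiet_quantile=0.25, jump_factor=2.0):
    """flux: 1-D array of proton flux samples over one storm phase (hourly)."""
    flux = np.asarray(flux, float)
    f_ave = np.quantile(flux, quiet_quantile)      # quiet-time flux level (assumed rule)
    # Injection onsets: samples that jump above jump_factor * quiet level
    # while the preceding sample was below it (a crude dispersionless proxy).
    above = flux > jump_factor * f_ave
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if onsets.size == 0:
        return 0.0
    f_max = flux[onsets].max()                     # strongest injection flux
    frequency = onsets.size / window_hours         # injections per hour
    return (f_max / f_ave) * frequency

# Example: a synthetic 24-hour main phase containing two injections.
flux = np.array([1, 1, 1, 5, 4, 1, 1, 1, 1, 8, 6, 2] + [1] * 12, float)
print(teip(flux, window_hours=24.0))
```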