• Title/Summary/Keyword: Distance weight


Studies on the Rice Yield Decreased by Ground Water Irrigation and Its Preventive Methods (지하수 관개에 의한 수도의 멸준양상과 그 방지책에 관한 연구)

  • 한욱동
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.16 no.1
    • /
    • pp.3225-3262
    • /
    • 1974
  • The purposes of this thesis are to clarify experimentally the variation of ground water temperature in tube wells during the irrigation period of paddy rice, the effect of ground water irrigation on the growth, grain yield and yield components of the rice plant, when and why the plant is most liable to be damaged by ground water, and to find effective ground water irrigation methods. The results obtained in this experiment are as follows: 1. The temperature of ground water in tube wells varies according to the location, year, and depth of the well. The average temperatures of ground water in tube wells 6.3 m and 8.0 m deep are 14.5°C and 13.1°C, respectively, during the irrigation period of paddy rice (from the middle of June to the end of September). In the former the temperature rises continuously from 12.3°C to 16.4°C, and in the latter from 12.4°C to 13.8°C, during the same period. These temperatures are approximately the same as the estimated temperatures. The temperature difference between the ground water and the surface water is approximately 11°C. 2. The analysis of the water quality of the "Seoho" reservoir and of the water from the tube well shows that the pH values of the ground water and the surface water are 6.35 and 6.00, respectively; inorganic components such as N, PO4, Na, Cl, SiO2 and Ca are more abundant in the ground water than in the surface water, while K, SO4, Fe and Mg are less abundant in the ground water. 3. 
The response of growth, yield and yield components of paddy rice to ground water irrigation is as follows: (1) Using ground water irrigation during the watered rice nursery period (seeding date: 30 April 1970), the characteristics of the young rice plants, such as plant height, number of leaves, and number of tillers, are inferior to those of young rice plants irrigated with surface water during the same period. (2) In cases where ground water and surface water are supplied separately by the gravity flow method, it is found that ground water irrigation delays the stage at which there is a maximum increase in the number of tillers by 6 days. (3) At the tillering stage just after transplanting, ground water irrigation has a better effect on the increase in the number of tillers than supplying surface water throughout the whole irrigation period. Conversely, the number of tillers is decreased by ground water irrigation at the reproductive stage. Plant height is strongly restrained by ground water irrigation. (4) Heading date is clearly delayed by ground water irrigation when it is practised during all the growth stages or at the reproductive stage only. (5) The heading date of rice plants is slightly delayed by irrigation with the gravity flow method as compared with the standing water method. (6) The responses of yield and yield components of rice to ground water irrigation are as follows: ① When ground water irrigation is practised during all the growth stages or during the reproductive stage, the culm length of the rice plant is reduced by 11 percent and 8 percent, respectively, compared with surface water irrigation used throughout all the growth stages. ② Panicle length is found to be longest on the test plot in which ground water irrigation is practised at the tillering stage. A tendency similar to that seen in the culm length is observed on the other test plots. 
③ The number of panicles is found to be least on the plot in which ground water irrigation is practised by the gravity flow method throughout all the growth stages. No significant difference is found between the other plots. ④ The numbers of spikelets per panicle for the various stages at which surface or ground water is supplied by the gravity flow method are as follows: surface water at all growth stages, 98.5; ground water at all growth stages, 62.2; ground water at the tillering stage, 82.6; ground water at the reproductive stage, 74.1. ⑤ The ripening percentage is about 70 percent on the test plots in which ground water irrigation is practised during all the growth stages or at the tillering stage only. However, when ground water irrigation is practised at the reproductive stage, the ripening percentage is reduced to 50 percent. This means a 20 percent reduction in the ripening percentage from using ground water irrigation at the reproductive stage. ⑥ The weight of 1,000 kernels shows a tendency similar to that of the ripening percentage, i.e., ground water irrigation during all the growth stages or at the reproductive stage results in a decreased 1,000-kernel weight. ⑦ The yields of brown rice from the various treatments are as follows. Gravity flow: surface water at all growth stages, 514 kg/10a; ground water at all growth stages, 428 kg/10a; ground water at the reproductive stage, 430 kg/10a. Standing water: surface water at all growth stages, 556 kg/10a; ground water at all growth stages, 441 kg/10a; ground water at the reproductive stage, 450 kg/10a. These figures show that ground water irrigation by the gravity flow and standing water methods during all the growth stages resulted in 18 percent and 21 percent decreases in the yield of brown rice, respectively, compared with surface water irrigation. 
Also, ground water irrigation at the reproductive stage by gravity flow and by standing water resulted in respective yield decreases of 16 percent and 19 percent, compared with the surface water irrigation method. 4. The results obtained from the experiments on improving the efficiency of ground water irrigation of paddy rice are as follows: (1) When standing water irrigation with surface water is practised, the daily average water temperature in a paddy field is 25.2°C, but when the gravity flow method is practised with the same irrigation water, the daily average water temperature is 24.5°C; that is, the former is 0.7°C higher than the latter. On the other hand, when ground water is used, the daily average water temperatures in a paddy field are 21.0°C and 19.3°C with the standing water and gravity flow methods, respectively; the former is approximately 1.7°C higher than the latter. (2) When non-water-logged cultivation is practised, the yield of brown rice is 516.3 kg/10a, while the yields of brown rice from the plot irrigated with ground water throughout the whole irrigation period and from the surface water irrigation plot are 446.3 kg/10a and 556.4 kg/10a, respectively. This means that there is no significant difference in yield between surface water irrigation and non-water-logged cultivation, and that non-water-logged cultivation results in a 12.6 percent increase in yield compared with the yield from the ground water irrigation plot. (3) Black or white coloring on the inside surface of the water warming ponds has no substantial effect on the temperature of the water. 
The average daily water temperatures of the various water warming ponds, having different depths, are expressed as Y = aX + b, while the daily average water temperatures at various depths in a water warming pond are expressed as Y = a·b^X (where Y is the daily average water temperature; a, b are constants depending on the type of water warming pond; X is the water depth). As the depth of the water warming pond is increased, the diurnal difference between the highest and the lowest water temperature decreases, and the time at which the highest water temperature occurs is delayed. (4) The degree of warming obtained by using a polyethylene tube, 100 m in length and 10 cm in diameter, is 4~9°C. The heat exchange rate of a polyethylene tube is 1.5 times higher than that of a water warming channel. The following equation expresses the water warming mechanism of a polyethylene tube, given the distance from the tube inlet, the time of day and several climatic factors:

$$\theta_w(x,t)=a_0\left(1-e^{-x/\Phi v}\right)+\sum_{n=1}^{2}\frac{a_n}{\sqrt{1+(n\omega\Phi)^2}}\left\{\sin\left(n\omega t+b_n+\tan^{-1}n\omega\Phi\right)-e^{-x/\Phi v}\sin\left(n\omega\left(t-\frac{x}{v}\right)+b_n+\tan^{-1}n\omega\Phi\right)\right\}+e^{-x/\Phi v}\,\theta_i$$

$$\theta_\infty(t)=\frac{\alpha\theta_a+\theta_{w'}+(S-B_s)U_w}{\beta},\qquad \Phi=\frac{c_p D U_\omega}{4\beta}$$

where θw is the discharged water temperature (°C); θa the air temperature (°C); θw′ the ponded water temperature (°C); S the net solar radiation (ly/min); t the time (radian); x the tube length (cm); D the tube diameter (cm); a0, an, bn constants determined from the θw(t) variation; cp the heat capacity of water (cal/°C·cm³); U, Uw the overall heat transfer coefficients (cal/°C·cm²·min); v the velocity of water in the polyethylene tube (cm/min); Bs the heat exchange rate between water and soil (ly/min); and θi the inlet water temperature (°C).
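As a numerical illustration, the tube-warming relation can be evaluated directly once its constants are fixed. The sketch below is not from the paper: the constant values (a0, an, bn, ω, Φ, v and the inlet temperature θi) are placeholder assumptions chosen only to exercise the formula.

```python
import math

def discharge_temp(x, t, a, b, omega, phi, v, theta_i):
    """Discharged water temperature theta_w(x, t) of a polyethylene tube.

    a = [a0, a1, a2] and b = [unused, b1, b2] are the harmonic constants,
    omega the angular frequency, phi and v the damping constant and flow
    velocity, theta_i the inlet water temperature. All values illustrative.
    """
    decay = math.exp(-x / (phi * v))          # attenuation along the tube
    theta = a[0] * (1.0 - decay)              # approach to equilibrium
    for n in (1, 2):                          # first two harmonics
        amp = a[n] / math.sqrt(1.0 + (n * omega * phi) ** 2)
        phase = b[n] + math.atan(n * omega * phi)
        theta += amp * (math.sin(n * omega * t + phase)
                        - decay * math.sin(n * omega * (t - x / v) + phase))
    return theta + decay * theta_i
```

A quick sanity check of the reconstruction: at x = 0 the exponential terms cancel and θw equals θi, while far downstream the inlet temperature is fully forgotten.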


STUDIES ON THE PROPAGATION OF ABALONE (전복의 증식에 관한 연구)

  • PYEN Choong-Kyu
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.3 no.3
    • /
    • pp.177-186
    • /
    • 1970
  • The spawning of the abalone, Haliotis discus hannai, was induced in October 1969 by air exposure for about 30 minutes. At temperatures from 14.0 to 18.8°C, the earliest trochophore stage was reached within 22 hours after the egg was laid. The trochophore was transformed into the veliger stage within 34 hours after fertilization. For 7~9 days after oviposition the veliger floated in sea water and then settled to the bottom. The peristomal shell was secreted along the outer lip of the aperture of the larval shell, and the first respiratory pore appeared at about 110 days after fertilization. The shell attained a length of 0.40 mm in 15 days, 1.39 mm in 49 days, 2.14 mm in 110 days, 5.20 mm in 170 days and 10.00 mm in 228 days. The monthly growth rate of the shell length is expressed by the following equation: L = 0.9981 e^{0.18659M}, where L is shell length and M is time in months. The density of floating larvae in the culture tank was about 10 larvae per 100 cc. The number of larvae attached to a polyethylene collector (30×20 cm) ranged from 10 to 600. Mortality of the settled larvae on the polyethylene collectors was about 87.0% during the 170 days following settlement. The culture of Navicula sp. was made with rough polyethylene collectors hung at three different depths, namely 5 cm, 45 cm and 85 cm. At each depth the highest cell concentration appeared after 15~17 days, with cell numbers as follows: 5 cm, 34.3×10⁴ cells/cm²; 45 cm, 27.2×10⁴ cells/cm²; 85 cm, 26.3×10⁴ cells/cm². At temperatures from 13.0 to 14.3°C, the distance travelled by the larvae (3.0 mm in shell length) averaged 11.36 mm over a period of 30 days. Their locomotion was relatively active between 6 p.m. and 9 p.m., and 52.2% of them moved during this period. When the larvae (2.0 mm in shell length) were kept in water at 0 to 1.8°C, they moved 1.15 cm between 4 p.m. and 8 p.m. and 0.10 cm between midnight and 8 a.m. The relationships between shell length and body weight of the abalone sampled from three different localities are as follows: Dolsan-do, W = 0.2479 L^{2.5721}; Huksan-do, W = 0.1001 L^{3.1021}; Pohang, W = 0.9632 L^{2.0611}.
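The fitted growth and length-weight curves can be evaluated directly. The helper below simply codes the reported equations (the abstract does not state the units of W, so no unit is assumed for it) and is a sketch rather than part of the original study.

```python
import math

def shell_length_mm(months):
    """Fitted monthly growth curve: L = 0.9981 * e^(0.18659 * M)."""
    return 0.9981 * math.exp(0.18659 * months)

# Length-weight relations W = a * L^b for the three sampling localities.
LENGTH_WEIGHT = {
    "Dolsan-do": (0.2479, 2.5721),
    "Huksan-do": (0.1001, 3.1021),
    "Pohang":    (0.9632, 2.0611),
}

def body_weight(locality, shell_length):
    """Evaluate the locality-specific allometric relation W = a * L^b."""
    a, b = LENGTH_WEIGHT[locality]
    return a * shell_length ** b
```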


The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (在没有选择的情况下共同特性对于顾客喜好的影响: 调节焦点的调节作用)

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study researches the effects of common features on a no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA (elimination by aspects) model. According to this theory, consumers are prone to focus only on unique features during comparison processing, dismissing any common features as redundant information. Recently, however, more provocative ideas have attacked the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) published a study critically developed against his prior perspective, with the proposition that common features may be a cognitive load to consumers, who may therefore prefer heuristic processing to systematic processing. This brings one question to the forefront: do common features affect consumer choice? If so, what are the concrete effects? This study tries to answer that question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is itself a preferred alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase the no-choice option in the context of optimism or the expectation that a more satisfactory alternative will appear later. 
Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. The perspective of the current study, exploring consumer choice and common features, is a somewhat creative viewpoint in the area of regulatory focus. These reviews inspire this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio in the choice setting for prevention-focused consumers, but do the opposite for promotion-focused consumers. The reasoning is that when prevention-focused consumers come in contact with common features, they may perceive higher similarity among the alternatives. This conflict among similar options would increase the no-choice ratio. Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias. Their confirmation processing would make their prior preference more robust, and the no-choice ratio may then shrink. This logic is verified in two experiments. 
The first is a 2×2 between-subjects design (presence of common features × regulatory focus) using digital cameras as the relevant stimulus, a product very familiar to young subjects. Specifically, the regulatory focus variable was median-split using an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm our hypothesis of interactive effects between regulatory focus and common features. Prior research had suggested that including common features had an effect on consumer choice, but this study shows that common features affect choice differently by consumer segment. The second experiment was used to replicate the results of the first. It was identical to the first except for two changes: a priming manipulation and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit. For the prevention condition, they had to use the words persistence, safety, protection, aversion, loss, responsibility, and stability. The room-for-rent stimulus had common features (sunshine, facilities, ventilation) and unique features (distance time and building state). These attributes implied various levels and valences, for replication of the prior experiment. Our hypothesis was supported repeatedly in the results, and the interaction effects between regulatory focus and common features were significant. Thus, these studies showed the dual effects of common features on consumer choice with a no-choice option. Adding common features may enhance or mitigate no-choice, contradictory as that may sound. 
Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increasing mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of a confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in a practical display context according to consumer segment (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator variable between common features and no-choice, in that the effect on the no-choice option is partly dependent on regulatory focus. This variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low ratio of no-choice, or the external validity issue, we hope it influences future studies to explore the little-known world of the no-choice option.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents do not yet benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. 
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text, without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, and as a result keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by authors do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experimental result also shows that 37% of the keywords assigned by the authors are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. 
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. First, the IVSM system was implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and, indeed, has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
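The five-step assignment process can be sketched as follows. This toy Python version illustrates steps (3)-(5) only, with made-up keyword-set weights; it is not the authors' implementation, and the preprocessing and parsing of steps (1)-(2) are assumed to have already produced the token lists and weight dictionaries.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def assign_keywords(document_tokens, keyword_sets, top_k=5):
    """Steps (3)-(5): build a term-frequency vector for the target document,
    score each candidate keyword set by cosine similarity, keep the best."""
    doc_vec = Counter(document_tokens)            # step (3): term frequencies
    scored = sorted(((kw, cosine(doc_vec, vec))   # step (4): similarity
                     for kw, vec in keyword_sets.items()),
                    key=lambda p: p[1], reverse=True)
    return [kw for kw, s in scored[:top_k] if s > 0]  # step (5)

# Hypothetical keyword sets with per-term weights (the output of step (1)).
keyword_sets = {
    "logistics": {"port": 1.0, "shipping": 1.0, "cargo": 1.0},
    "fashion":   {"style": 1.0, "clothes": 1.0},
}
```

A document tokenized as ["port", "cargo", "shipping", "port"] scores highest against the "logistics" set, so that keyword would be assigned.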

Predicting Oxygen Uptake for Men with Moderate to Severe Chronic Obstructive Pulmonary Disease (COPD환자에서 6분 보행검사를 이용한 최대산소섭취량 예측)

  • Kim, Changhwan;Park, Yong Bum;Mo, Eun Kyung;Choi, Eun Hee;Nam, Hee Seung;Lee, Sung-Soon;Yoo, Young Won;Yang, Yun Jun;Moon, Joung Wha;Kim, Dong Soon;Lee, Hyang Yi;Jin, Young-Soo;Lee, Hye Young;Chun, Eun Mi
    • Tuberculosis and Respiratory Diseases
    • /
    • v.64 no.6
    • /
    • pp.433-438
    • /
    • 2008
  • Background: Measurement of the maximum oxygen uptake in patients with chronic obstructive pulmonary disease (COPD) has been used to determine the intensity of exercise and to estimate the patient's response to treatment during pulmonary rehabilitation. However, cardiopulmonary exercise testing is not widely available in Korea. The 6-minute walk test (6MWT) is a simple method of measuring the exercise capacity of a patient. It also provides highly reliable data and, with a standardized protocol, reflects fluctuations in exercise capacity relatively well. The prime objective of the present study is to develop a regression equation for estimating the peak oxygen uptake (VO2) of men with moderate to very severe COPD from the results of a 6MWT. Methods: A total of 33 male patients with moderate to very severe COPD agreed to participate in this study. Pulmonary function testing, cardiopulmonary exercise testing and a 6MWT were performed on their first visits. An index of work (6Mwork, the 6-minute walk distance [6MWD] × body weight) was calculated for each patient. The variables closely related to the peak VO2 were identified through correlation analysis. Including those variables, the equation to predict the peak VO2 was generated by the multiple linear regression method. Results: The peak VO2 averaged 1,015±392 ml/min, and the mean 6MWD was 516±195 meters. The 6Mwork (r=.597) was better correlated with the peak VO2 than the 6MWD (r=.415). The other variables highly correlated with the peak VO2 were the FEV1 (r=.742), DLco (r=.734) and FVC (r=.679). The derived prediction equation was VO2 (ml/min) = (274.306 × FEV1) + (36.242 × DLco) + (0.007 × 6Mwork) − 84.867. Conclusion: In circumstances where measurement of the peak VO2 is not possible, we consider the 6MWT a simple alternative for estimating the peak VO2. Of course, a much larger-scale trial is necessary to validate our prediction equation.
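The derived regression is straightforward to apply. The sketch below codes the equation as reported; the example inputs are hypothetical, and the units are assumed to be the conventional ones (FEV1 in L, DLco in ml/min/mmHg, 6MWD in m, body weight in kg), which the abstract does not spell out.

```python
def six_m_work(six_mwd_m, body_weight_kg):
    """Index of work: 6-minute walk distance multiplied by body weight."""
    return six_mwd_m * body_weight_kg

def predict_peak_vo2(fev1, dlco, six_mwd_m, body_weight_kg):
    """Peak VO2 (ml/min) from the reported multiple linear regression."""
    work = six_m_work(six_mwd_m, body_weight_kg)
    return 274.306 * fev1 + 36.242 * dlco + 0.007 * work - 84.867

# Hypothetical patient: FEV1 1.5 L, DLco 15, 6MWD 516 m, weight 60 kg.
```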

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.57-78
    • /
    • 2020
  • With the development of the Internet, consumers have the opportunity to check product information easily through E-Commerce. Product reviews used in the process of purchasing goods are based on user experience, allowing consumers to participate as producers of information as well as consult it. This can increase the efficiency of purchasing decisions from the perspective of consumers and, from the seller's point of view, can help develop products and strengthen competitiveness. However, it takes a lot of time and effort for consumers to read the vast number of product reviews offered by E-Commerce sites and grasp the overall assessment, and the assessment along the dimensions they consider important, of the products they want to compare. This is because product reviews are unstructured information, and the sentiment and assessment dimension of a review cannot be read off immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products on each dimension, such as performance, weight, delivery, speed, and design. Therefore, in this paper we propose a method to automatically generate multi-dimensional product assessment scores from the product reviews to be compared. The method presented in this study consists largely of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are created from reviews of the large-category product group. By combining word embedding and association analysis, the dimension classification model addresses the limitation that the word embedding methods used in existing studies to find the relevance between dimensions and words consider only the distance between words in sentences. 
The sentiment analysis model is a CNN model trained on learning data tagged with positives and negatives at the phrase level for accurate polarity detection. The individual product scoring phase then applies the pre-prepared models to phrase-level reviews. Multi-dimensional assessment scores are obtained by grouping the phrases judged to describe each assessment dimension and aggregating their polarity by dimension according to the proportion of such phrases. In the experiment of this paper, approximately 260,000 reviews of the large-category product group were collected to build the dimension classification model and the sentiment analysis model. In addition, reviews of the laptops of companies S and L sold on E-Commerce sites were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis indicating the frequency between words and dimensions. As a result of combining word embedding and association analysis, the accuracy of the model increased by 13.7%. The sentiment analysis model analyzed the assessments more closely when trained at the phrase level rather than on sentences; its accuracy was 29.4% higher than that of the sentence-based model. Through this study, both sellers and consumers can expect efficient decision making in purchasing and product development, given that they can make multi-dimensional comparisons of products. In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes and analyzed using word embedding and association analysis together, improving the objectivity of a more precise multi-dimensional analysis. 
This will be an attractive analysis model in that it not only enables more effective service deployment in the evolving, fiercely competitive E-Commerce market, but also satisfies both sellers and customers.
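A minimal sketch of the scoring phase, assuming the pre-prepared models have already labelled each phrase with an assessment dimension and a polarity (the paper's CNN and embedding/association models are not reproduced here):

```python
from collections import defaultdict

def dimension_scores(labelled_phrases):
    """Aggregate phrase-level polarity into per-dimension scores.

    labelled_phrases: iterable of (dimension, polarity) pairs, where
    polarity is +1 (positive) or -1 (negative) as output by a phrase-level
    sentiment model. Returns each dimension's share of positive phrases,
    a simple stand-in for the paper's proportion-based aggregation.
    """
    positive = defaultdict(int)
    total = defaultdict(int)
    for dimension, polarity in labelled_phrases:
        total[dimension] += 1
        if polarity > 0:
            positive[dimension] += 1
    return {d: positive[d] / total[d] for d in total}
```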

Engineering Geological Considerations on the Preservation of the Muryong Royal Tomb (무령왕릉보존에 있어서의 지질공학적 고찰)

  • 서만철;최석원;구민호
    • Proceedings of the KSEEG Conference
    • /
    • 2001.05b
    • /
    • pp.42-63
    • /
    • 2001
  • A detailed survey of the Songsanri tomb site, including the Muryong royal tomb, was carried out from May 1, 1996 to April 30, 1997. A quantitative analysis was attempted to find changes in the tombs themselves since the excavation. The main subjects of the survey were to find the cause of infiltration of rainwater and groundwater into the tombs and the tomb site, to monitor the movement and safety of the tomb structures, to find a removal method for the algae inside the tombs, and to design an air-control system to solve the high humidity and dew inside the tombs. For these purposes, a detailed survey inside and outside the tombs using an electronic distance meter and a small airplane, monitoring of temperature and humidity, geophysical exploration including electrical resistivity, geomagnetic, gravity and georadar methods, drilling, measurement of the physical and chemical properties of drill cores, and measurement of groundwater permeability were conducted. We found that the center of the subsurface tomb and the center of the soil mound on the ground differ by 4.5 meters and 5 meters for the 5th tomb and the 7th tomb, respectively. This offset has caused unequal stress on the tomb structures. In the 7th tomb (the Muryong royal tomb), 435 bricks were broken out of 6025 bricks in 1972, but 1072 bricks were broken in 1996. The break rate increased by about 250% in just 24 years; in the 6th tomb it increased by about 290%. The situation in 1996 is the result of just 24 years, while the situation in 1972 was the result of about 1450 years. The status of brick breakage indicates that a severe problem is under way. The eastern wall of the Muryong royal tomb is moving toward the inside of the tomb at a rate of 2.95 mm/myr in the rainy season and 1.52 mm/myr in the dry season. The frontal wall shows the biggest movement in the 7th tomb, with a rate of 2.05 mm/myr toward the passageway.
The 6th tomb shows the biggest movement among the three tombs, with rates of 7.44 mm/myr and 3.61 mm/myr toward the east, consistent with the high break rate of bricks in the 6th tomb. Georadar sections of the shallow soil layer reveal several faults in the topsoil layer of the 5th and 7th tombs. Rainwater flowed through these faults into the tombs and nearby ground, and the high water content in the nearby ground resulted in low electrical resistivity and high humidity inside the tombs. The high humidity inside the tombs, together with high temperature and a moderate light source, made a good environment for algae growth. The 6th tomb is in the most severe condition, and the 7th tomb is second, in terms of algae growth. Artificial change of the tomb environment since the excavation, infiltration of rainwater and groundwater into the tomb site, and a bad drainage system have resulted in a dangerous state for the tomb structures. The main cause of many problems, including the breaking of bricks, the movement of tomb walls and algae growth, is the infiltration of rainwater and groundwater into the tomb site. Therefore, protection of the tomb site from high water content should be carried out first. The waterproofing method includes a cover system over the tomb site using geotextile, a clay layer and a geomembrane, and a deep trench, 2 meters down to the base of the 5th tomb, at the north of the tomb site. Decreasing and balancing the soil weight above the tombs are also needed for the safety of the tomb structures. For the algae growing inside the tombs, we recommend spraying K101, which was developed in this study, on the wall surfaces and then exposing them to ultraviolet light sources for 24 hours. The air-control system should be changed to a constant temperature and humidity system for the 6th and 7th tombs. It seems much better to place the system in the frontal room and to circulate cold air inside the tombs to solve the dew problem. The above-mentioned preservation methods are suggested to give the least change to the tomb site and to solve the most fundamental problems.
Repairs should be planned in order, and special care is needed for the safety of the tombs during the repairing work. Finally, a monitoring system measuring the tilting of tomb walls, water content, groundwater level, temperature and humidity is required to monitor and evaluate the repair work.

  • PDF
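The brick break-rate figures quoted in the abstract above (435 broken bricks in 1972 versus 1072 in 1996, out of 6025 bricks in the Muryong royal tomb) can be checked with a few lines of arithmetic; the 1996 count is roughly 250% of the 1972 count, as the paper states.

```python
# Reproducing the brick break-rate figures for the 7th (Muryong) tomb:
# 435 broken bricks in 1972 vs 1072 in 1996, out of 6025 bricks total.
broken_1972, broken_1996, total = 435, 1072, 6025

rate_1972 = broken_1972 / total * 100       # break rate in 1972, percent
rate_1996 = broken_1996 / total * 100       # break rate in 1996, percent
increase = broken_1996 / broken_1972 * 100  # 1996 count as % of 1972 count

print(f"{rate_1972:.1f}% -> {rate_1996:.1f}% (about {increase:.0f}%)")
# -> 7.2% -> 17.8% (about 246%)
```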

Studies on the Kiln Drying Characteristics of Several Commercial Woods of Korea (국산 유용 수종재의 인공건조 특성에 관한 연구)

  • Chung, Byung-Jae
    • Journal of the Korean Wood Science and Technology
    • /
    • v.2 no.2
    • /
    • pp.8-12
    • /
    • 1974
  • 1. If unity is assigned to prongs whose ends touch each other when estimating the internal stresses occurring in them, the internal stresses developed in open prongs can be evaluated as a ratio to that unity. In accordance with the above statement, an equation was derived as follows. To employ this equation, the prongs should be made as shown in Fig. 1, and A and B' measured as indicated in Fig. 1. A more precise value results as the angle $\theta$ becomes smaller. $CH=\frac{(A-B')(4W+A)(4W-A)}{2A[2W+(A-B')][2W-(A-B')]}{\times}100\%$ where A is the thickness of the prong, B' is the distance between the two prongs shown in Fig. 1, and CH is the value of internal stress expressed as a percentage. If precision is not required, the equation can be simplified as follows: $CH=\frac{A-B'}{A}{\times}200\%$ 2. Under scheduled drying conditions in the kiln, when the weight of a sample board is constant, the moisture content of the shell of a sample board in the case of normal casehardening is lower than the equilibrium moisture content indicated by the Forest Products Laboratory, U.S. Department of Agriculture. This result is usually true, especially for a thin sample board. A thick unseasoned or reverse-casehardened sample does not follow the above statement. 3. The results of the comparison of drying rate among the five kinds of wood given in Table 1 show that the drying rates, i.e., the quantity of water evaporated from 1 square centimeter of surface area per hour, rank in the following order of magnitude: (1) Ginkgo biloba Linne (2) Diospyros kaki Thunberg (3) Pinus densiflora Sieb. et Zucc. (4) Larix kaempferi Sargent (5) Castanea crenata Sieb. et Zucc. It is shown, for example, that at a moisture content of 20 percent the highest value, for Ginkgo biloba, is on the order of 3.8 times as great as that for Castanea crenata Sieb. et Zucc., which has the lowest value.
Especially below a moisture content of 26 percent, the drying rate, as a function of moisture content in percentage, is represented by a linear equation. All of these linear equations are highly significant when testing the coefficient of X, i.e., moisture content in percentage. In Table 2, the symbols are as follows: Y is the quantity of water evaporated from 1 square centimeter of surface area per hour, and X is the moisture content in percentage. The drying rate is plotted against the moisture content in percentage in Fig. 2. 4. One hundred times the ratio (P%) of the number of samples occurring in the CH 4 class (from 76 to 100% of CH ratio) to the total number of samples tested at a given SR ratio is presented in Table 3. (The P% indicated above is taken as the danger probability in percentage.) Summarizing the above results, the conclusions are given in Table 4. NOTE: In Table 4, the column numbers 1, 2 and 3 denote the following, respectively. 1) The minimum SR ratio which does not produce the CH 4 class is indicated in column 1. 2) The extent of the SR ratio which is confined within the safety allowance of 30 percent is shown in column 2. 3) The lowest limit of the SR ratio which gives the highest danger probability of 100 percent is shown in column 3. Analyzing the above results, it is clear that chestnut and larch easily form internal stress in comparison with persimmon and pine. However, considering the fact that reverse casehardening occurred in fir and ginkgo under the same drying conditions as the others, it is deduced that fir and ginkgo form normal casehardening with difficulty in comparison with the other species tested. 5. All kinds of drying defects except casehardening develop when the internal stresses exceed the ultimate strength of the material under long-time loading.
Under drying conditions at a temperature of $170^{\circ}F$ and lower humidity, the drying defects are not severe. However, under the same conditions at $200^{\circ}F$, with lower humidity and no end coating, all sample boards develop severe drying defects. Chestnut in particular was very prone to drying defects such as casehardening and splitting.

  • PDF
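The two prong internal-stress equations in the abstract above can be sketched directly in code. W is the width term from Fig. 1 of the paper (its exact definition is only in the original figure), and the sample measurements below are assumptions for illustration; the sketch also shows that the simplified form approximates the full one when A - B' is small relative to W.

```python
# Sketch of the prong internal-stress (casehardening) formulas quoted above.
# A = prong thickness, B' = gap between the two prongs, W = width term from
# Fig. 1 of the paper (assumed here; see the original figure).

def ch_full(a, b_prime, w):
    """Full equation: CH = (A-B')(4W+A)(4W-A) / (2A[2W+(A-B')][2W-(A-B')]) x 100%."""
    d = a - b_prime
    return d * (4 * w + a) * (4 * w - a) / (2 * a * (2 * w + d) * (2 * w - d)) * 100

def ch_simple(a, b_prime):
    """Simplified equation: CH = (A-B')/A x 200%."""
    return (a - b_prime) / a * 200

# Example with assumed measurements (cm): the two forms nearly agree.
a, b_prime, w = 1.0, 0.9, 2.0
print(round(ch_full(a, b_prime, w), 2), round(ch_simple(a, b_prime), 2))
# -> 19.7 20.0
```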

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis by using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. 
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF
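The SELINK adjustment described in the abstract above computes, for each selected link, a factor equal to the ground count divided by the assigned volume, and applies it to the zones whose trips use that link. A simplified sketch of that idea follows; the zone names, volumes and the production-only scaling are hypothetical simplifications of the paper's procedure.

```python
# Simplified sketch of the SELINK link-adjustment idea: scale the trip
# productions of the zones whose assigned trips traverse a selected link
# by the ratio of observed (ground count) to assigned link volume.

def link_adjustment_factor(ground_count, assigned_volume):
    """Ratio of observed to model-assigned truck volume on a selected link."""
    return ground_count / assigned_volume

def adjust_productions(productions, link_zones, factor):
    """Scale the productions of zones whose trips use the selected link."""
    return {zone: p * factor if zone in link_zones else p
            for zone, p in productions.items()}

# Hypothetical zones and volumes for illustration only.
productions = {"zone_a": 1000.0, "zone_b": 500.0, "zone_c": 800.0}
factor = link_adjustment_factor(ground_count=940, assigned_volume=1175)
adjusted = adjust_productions(productions, {"zone_a", "zone_c"}, factor)
print(factor, adjusted)   # factor 0.8 scales zone_a and zone_c down
```

In the paper this adjustment is iterated (up to four SELINK rounds) over 16 or 32 selected links, with the gravity model possibly recalibrated in between; the sketch shows only a single link and round.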

Sensory Information Processing

  • Yoshimoto, Chiyoshi
    • Journal of Biomedical Engineering Research
    • /
    • v.6 no.2
    • /
    • pp.1-8
    • /
    • 1985
  • The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flow with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF 15-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70$\pm$1.32 mmHg/min) compared to CF dialyzers (4.32$\pm$0.55 mmHg/min) (p<0.05). However, there was no observable difference in the UFR between the two dialyzers. Neither APD nor UFR showed any significant increase with an increasing number of reuses for up to more than 20 reuses. A substantial number of failures observed in APD (larger than 20 mmHg/min) on the reused dialyzers (2 out of 40 CF and 5 out of 26 C-DAK) were attributed to possible damage to the fibers. 
The CF 15-11 HFDs which failed the APD test did not show changes in the UFR compared to normal dialyzers, indicating that the APD test is more sensitive than the UFR test for evaluating the integrity of the fibers.

For quantitative measurement of the light reflected from a clinical diagnostic strip, a prototype reflectance photometer was designed. The strip loader and cassette were made to obtain more accurate reflectance parameters. The strip was illuminated at 45° through an optical fiber, and the intensity of the reflected light was determined at a right angle using a photodiode. The Kubelka-Munk coefficient and reflection optical density were determined at four different wavelengths (500, 550, 570 and 610 nm) for a blood glucose strip. For glucose concentrations higher than 300 mg/dl, a saturation state of absorbance was observed at 500, 550 and 570 nm. The correlation between glucose concentration and the parameters was best at 610 nm.

Radiation-induced fibrosarcoma tumors were grown on the flanks of C3H mice. The mice were divided into two groups. One group was injected with Photofrin II intravenously (2.5 mg/kg body weight). The other group received no Photofrin II. Mice from both groups were irradiated for approximately 15 minutes at 100, 300, or 500 mW/cm2 with argon (488 nm/514.5 nm), dye (628 nm) and gold vapor (pulsed 628 nm) laser light. The photosensitizer behaved as an added absorber. Under our experimental conditions, the presence of Photofrin II increased the surface temperature by at least 40%, and the temperature rise due to 300 mW/cm2 irradiation exceeded values for hyperthermia. Light and temperature distributions with depth were estimated by a computer model. The model demonstrated the influence of wavelength on the thermal process and proved to be a valuable tool for investigating internal temperature rise.

We investigated the structural geometry of thirty-eight Korean femurs. 
The purpose of this study is to identify major geometrical differences between Korean femurs and others that we believe belong to Caucasians, so that we can gain insights into a femoral component design that fits Asians, including Koreans. We utilized computerized tomography (CT) images of femurs extracted from cadavers. The CT images were transformed into bitmap data using a film scanner and then analyzed using a commercially available software package called Image v.1.0 and a Macintosh IIci computer. The resulting data were compared with already published data. The major results show that the geometry of the Korean femurs is significantly different from that of Caucasians: (1) the anteversion angle and the canal flare index are greater by approximately 8° and 0.5, respectively, (2) the shape of the isthmus cross section is more round, and (3) the distance between the lesser trochanter and the proximal border of the isthmus is shorter by about 15 mm. The results suggest that the femoral component suitable for Asians should be different from the currently used components designed and manufactured mostly by European or American companies.

It is well known that the nonlinear propagation characteristics of waves in tissue may give very useful information for medical diagnosis. In this paper, a new method to detect the nonlinear propagation characteristics of internal vibration in tissue under low-frequency mechanical vibration, using bispectral analysis, is proposed. In the method, a low-frequency vibration of f0 (=100 Hz) is applied to the surface of the object, and the waveform of the internal vibration x(t) is measured from the Doppler frequency modulation of simultaneously transmitted probing ultrasonic waves. 
Then, the bispectra of the signal x(t) at the frequencies (f0, f0) and (f0, 2f0) are calculated to estimate the nonlinear propagation characteristics as their magnitude ratio; since the bispectrum is free from Gaussian additive noise, the value can be obtained with a high S/N. A basic experimental system was constructed using 3.0 MHz probing ultrasonic waves, and several experiments were carried out on phantoms. The results show the superiority of the proposed method over the conventional method using the power spectrum, and also its usefulness for tissue characterization.

This paper describes the implementation of computerized radial pulse diagnosis with the aid of a clinical expert. On this basis, we composed a radial pulse diagnosis system in Korean traditional medicine. The system consists of a radial pulse wave detection system and a radial pulse diagnosis system. With the detection system, we detected the Inyoung and Cheongu radial pulse waves and processed them. We then obtained the characteristic parameters of the radial pulse wave and quantified them according to the method of Inyoung-Cheongu Comparison Radial Pulse Diagnosis. We defined the judgement standard of the radial pulse diagnosis system and confirmed the possibility of realizing automatic radial pulse diagnosis in Korean traditional medicine.

Microspheres are expected to be applied to biomedical areas such as solid-phase immunoassays, drug delivery systems, and immunomagnetic cell separation. To synthesize microspheres for biomedical application, a "two stage shot growth method" was developed. The uniformity ratio of the synthesized microspheres was always smaller than 1.05, and the surface charge density (or the number of ionizable functional groups) of the microspheres synthesized by the "two stage shot growth method" was 6~13 times higher than that of microspheres synthesized by conventional seeded batch copolymerization. 
As a preliminary step toward biomedical application, adsorption experiments of bovine albumin on the microspheres were carried out under various conditions. The maximum adsorbed amount was obtained in the neighborhood of pH 4.5. Since the isoelectric point of bovine albumin is pH 5.0, the experimental result shows a shift toward the acidic region. The adsorption isotherm was obtained; the plateau region was always reached at 2.0 g/L (bulk concentration of bovine albumin). The effect of the kind and amount of surface functional groups was also examined.

A medical image workstation was developed using multimedia techniques. The system, based on a PC-486DX, was designed to acquire medical images produced by medical imaging instruments and related audio information, that is, doctors' reported results. Input information was processed and analyzed, and the results were presented in the form of graphs and animation. All the information in the system was hierarchically related, with the image at the apex. Processing and analysis algorithms were implemented so that diagnostic accuracy could be improved. The diagnostic information can be transferred for patient diagnosis through a LAN (local area network).

In a conventional infrared imaging system, complex infrared lens systems are usually used for directing collimated narrow infrared beams into the high-speed 2-dimensional optic scanner. In this paper, a simple reflective infrared optic system with a 2-dimensional optic scanner is proposed for the realization of a medical infrared thermography system. It has been experimentally proven that the infrared thermography system composed of the proposed optic system has a temperature resolution of 0.1°C with a spatial resolution of 1 mrad, an image matrix size of 256 x 240, and an imaging time of 4 seconds. 
In this paper, MIIS (Medical Image Information System) has been designed and implemented using the INGRES RDBMS, based on a client/server architecture. The implemented system allows users to register and retrieve patient information, medical images and diagnostic reports. It also provides the function to display this information in workstation windows simultaneously, using the designed menu-driven graphical user interface. Medical image compression/decompression techniques are implemented and integrated into the medical image database system for efficient data storage and fast access through the network.

In this paper, a computerized BEAM was implemented for the space-domain analysis of EEG. The transformation from temporal summation to two-dimensional mappings is formed by a 4-nearest-point interpolation method. There are two methods of representing the BEAM: one is the dot density method, which classifies brain electrical potential into 9 levels by the dot density of gray levels, and the other is the colour method, which classifies it into 12 levels by red-green colours. In this BEAM, the instantaneous change and the average energy distribution over any arbitrary time interval of brain electrical activity could be observed and analyzed easily. In the frequency domain, the distribution of the energy spectrum of a specific band can easily distinguish normality from abnormality.

A laboratory information system (LIS) is a key tool for managing laboratory data in clinical pathology. Our department has developed an information system for routine hematology using a down-sized computer system. We used an IBM 486-compatible PC with 16 MB main memory, a 210 MB hard disk drive, 9 RS-232C ports and a 24-pin dot printer. The operating system and database management system were SCO UNIX and SCO foxbase, respectively. For program development, we used the Xbase language provided by SCO foxbase. The C language was used for interface purposes. 
To make the system user friendly, a pull-down menu was used. The system is connected to our hospital information system via an application program interface (API), so the information related to patients and request details is automatically transmitted to our computer. Our system is interfaced with two complete blood count analyzers (Sysmex NE-8000 and Coulter STKS) for unidirectional data transmission from analyzer to computer. The authors suggest that this system based on a down-sized computer could provide a progressive approach to a total LIS based on a local area network, and the implemented system could serve as a model for other hospitals' LIS for routine hematology. 30609 T00401030609 ^x To develop an artificial bone substitute that is gradually degraded and replaced by regenerated natural bone, the authors designed a composite consisting of calcium phosphate and collagen. For use as the structural matrix of the composite, collagen was purified from human umbilical cord. The obtained collagen was treated with pepsin to remove telopeptides, and finally immune-free atelocollagen was produced. The cross-linked atelocollagen was highly resistant to collagenase-induced collagenolysis. The cross-linked collagen demonstrated an improved tensile strength. 30618 T00401030618 ^x This paper is a study on the design of an adaptive filter for QRS complex detection. We propose a simple adaptive algorithm to increase the capability of noise cancellation in QRS complex detection with a two-stage adaptive filter. At the first stage, background noise is removed, and at the next stage, only the spectrum of QRS complex components is passed. The two adaptive filters can keep track of the changes of both noise and QRS complex. Each adaptive filter consists of a prediction-error filter and an FIR filter; the impulse response of the FIR filter uses the coefficients of the prediction-error filter. The detection rates for records 105 and 108 of the MIT/BIH database were 99.3% and 97.4%, respectively.
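The prediction-error filters in the QRS-detection record above are adaptive FIR predictors; a minimal single-stage LMS sketch follows. The filter order, step size, and function name are assumptions, and the paper's full two-stage structure is not reproduced here.

```python
# Minimal LMS prediction-error filter: adapt FIR coefficients w so that
# the weighted sum of the last `order` samples predicts the next sample.
def lms_predictor(x, order=4, mu=0.05):
    """Returns the final coefficients and the prediction-error signal."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]      # most recent sample first
        pred = sum(wi * xi for wi, xi in zip(w, past))
        e = x[n] - pred                  # prediction error
        for i in range(order):
            w[i] += mu * e * past[i]     # LMS coefficient update
        errors.append(e)
    return w, errors
```

In the two-stage scheme the abstract describes, coefficients adapted this way would then supply the impulse response of a second FIR filtering stage.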
30619 T00401030619 ^x To develop an artificial bone substitute that is gradually degraded and replaced by regenerated natural bone, the authors designed and produced a composite consisting of calcium phosphate and collagen. Pepsin-treated type I atelocollagen of human umbilical cord origin was used as the structural matrix, by which sintered or non-sintered carbonate apatite was encapsulated to form an inorganic-organic composite. Cross-linking the atelocollagen by UV irradiation increased both the compressive and tensile strength. Collagen degradation by collagenase-induced collagenolysis was also decreased. 30620 T00401030620 ^x We have developed a monoleaflet polymer valve as an inexpensive and viable alternative, especially for short-term use in a ventricular assist device or total artificial heart. The frame and leaflet of the polymer valve were made from polyurethane. To evaluate the hemodynamic performance of the polymer valve, a comparative study of flow dynamics past the polymer valve and a St. Jude Medical prosthetic valve under physiological pulsatile flow conditions in vitro was made. Comparisons between the valves were made on transvalvular pressure drop, regurgitation volume and maximum valve opening area. The polymer valve showed a smaller regurgitation volume and transvalvular pressure drop compared to the mechanical valve at higher heart rates. The results showed that the functional characteristics of the polymer valve compared favorably with those of the mechanical valve at higher heart rates. 30621 T00401030621 ^x The explosive evaporative removal process of biological tissue by absorption of a CW laser has been simulated using gelatin and a multimode Nd:YAG laser. Because the point of maximum temperature of laser-irradiated gelatin lies below the surface due to surface cooling, evaporation at the boiling temperature occurs explosively from below the surface.
The important parameters of this process are the ratio of conduction loss to laser power absorption (defined as the conduction-to-laser-power parameter, Nk), the ratio of convection heat transfer at the surface to conduction loss (defined as Bi), the dimensionless extinction coefficient (defined as Br), and the dimensionless irradiation time (defined as Fo). The dependence of Fo on Nk and Bi has been observed by experiment, and the results have been compared with numerical results obtained by solving a 2-dimensional conduction equation. Fo and the explosion depth (from the surface to the point of maximum temperature) increase when Nk and Bi increase. To find the minimum laser power for the explosive evaporative removal process, a steady-state analysis has also been made. The limit of Nk to induce evaporative removal, which is proportional to the inverse of the laser power, has been obtained. 30622 T00401030622 ^x N1 and N2 gross neural action potentials were measured from the round window of the guinea pig cochlea at the onset of acoustic stimuli. N1-N2 audiograms were made by regulating stimulus intensities to produce constant N1-N2 potentials as criteria for different input tone-pip frequencies. The lowest threshold was measured with an input tone pip of 15 dB SPL in intensity and 12 kHz in frequency when the animal was in normal physiological condition. The procedure of the experimental measurements is explained in detail. This experimental approach is very useful for the investigation of cochlear function. Both the nonlinear and active functions of the cochlea can be monitored by N1-N2 audiograms. 30623 T00401030623 ^x In electrical impedance tomography (EIT), we use boundary current and voltage measurements to provide information about the cross-sectional distribution of electrical impedance or resistivity. One of the major problems in EIT has been the inaccessibility of internal voltage or current data in finding the internal impedance values.
We propose a new image reconstruction method using internal current density data measured by NMR. We obtained a two-dimensional current density distribution within a phantom by processing the real and imaginary MR images from a 4.77 NMR machine. We implemented a resistivity image reconstruction algorithm using the finite element method and a sensitivity matrix. We present computer simulation results of the image reconstruction algorithm and the future direction of the research. 30624 T00401030624 ^x A new digital image analysis technique for the discrimination of cancer cells is presented in this paper. The object images were thyroid gland cell images diagnosed as normal and abnormal (two types of abnormal: follicular neoplastic cells and papillary neoplastic cells), respectively. By using the proposed region segmentation algorithm, the cells were segmented into nuclei. Sixteen feature parameters were used to calculate the features of each nucleus. As a consequence of using the dominant-feature-parameter method proposed in this paper, a discrimination rate of 91.11% was obtained for thyroid gland cells. 30625 T00401030625 ^x An electrical stimulator was designed to induce locomotion for patients with paraplegia caused by central nervous system injury. Optimal stimulus parameters, which can minimize muscle fatigue and achieve effective muscle contraction, were determined in slow and fast muscles in Sprague-Dawley rats. The stimulus patterns of our stimulator were designed to simulate electromyographic activity monitored during locomotion of normal subjects. Muscle types of the lower extremity were classified according to their mechanical contraction properties: slow muscle (soleus m.) and fast muscles (medial gastrocnemius m., rectus femoris m., vastus lateralis m.). The optimal parameters of electrical stimulation for slow muscles were 20 Hz, 0.2 ms square pulses. For fast muscles, 40 Hz, 0.3 ms square pulses were optimal to produce repeated contraction.
Higher stimulus intensity was required when synergistic muscles were stimulated simultaneously than when they were stimulated individually. Electrical stimulation for each muscle was designed to generate bipedal locomotion, so that individual muscles alternate contraction and relaxation to simulate the stance and swing phases. A portable 16-channel electrical stimulator built around a microprocessor was constructed and applied to paraplegic patients with lumbar cord injury. The electrical stimulator partially restored gait function in the paraplegic patients. 30626 T00401030626 ^x Two-dimensional modelling of cochlear biomechanics is presented in this paper. The Laplace partial differential equation that represents the fluid mechanics of the cochlea has been transformed into a two-dimensional electrical transmission line. The procedure of this transformation is explained in detail. A comparison between the one- and two-dimensional models is also presented. This electrical modelling of the basilar membrane (BM) is clearly useful for the further development of active elements, which are essential in producing the sharp tuning of the BM. This paper shows that the two-dimensional model is qualitatively better than the one-dimensional model in both the amplitude and phase responses of the BM displacement. The present model covers only the frequency response; however, because the model is electrical, the two-dimensional transmission-line model can be extended to the time response without any difficulty. 30627 T00401030627 ^x A method has been proposed for the fully automatic detection of the left ventricular endocardial boundary in 2D short-axis echocardiograms using a geometric model. The procedure has the following three distinct stages. First, the initial center is estimated by the initial center estimation algorithm, which is applied to the decimated image. Second, the center estimation algorithm is applied to the original image, and then best-fit elliptic model estimation is performed.
Third, the best-fit boundary is detected by a cost function based on the best-fit elliptic model. The proposed method shows effective results without manual intervention by a human operator. 30628 T00401030628 ^x An intelligent trajectory control method that controls the moving direction and average velocity of a prosthetic arm is proposed, using pattern recognition and force estimation from EMG signals. We also propose a real-time trajectory planning method that generates continuous acceleration paths using 3-stage linear filters, to minimize the impact to the human body induced by arm motions and to reduce muscle fatigue. We use a combination of an MLP and a fuzzy filter for pattern recognition to estimate the direction of a muscle, and Hogan's method for the force estimation. EMG signals are acquired by using an amputation simulator and two-dimensional joystick motion. Simulation results of the proposed prosthetic arm control system using the EMG signals show that the arm effectively follows the desired trajectory, depending on the estimated force and direction of muscle movements. 30638 T00401030638 ^x A new neural network architecture for the recognition of patterns from images is proposed, which is partially based on the results of physiological studies. The proposed network is composed of multiple layers, and the nerve cells in each layer are connected by spatial filters that approximate receptive fields in the optic nerve. In the proposed method, pattern recognition for complicated images is carried out using global features as well as local features such as lines and end-points. A new method of generating matched filters representing global features is proposed for this network. 30659 T00401030659 ^x An implementation scheme for a magnetic nerve stimulator using a switching-mode power supply is proposed.
By using a switching-mode power supply rather than a conventional linear power supply for charging the high-voltage capacitors, the weight and size of the magnetic nerve stimulator can be considerably reduced. The maximum output voltage of the developed magnetic nerve stimulator using the switching-mode power supply is 3,000 volts, and the switching time is about 100 msec. Experimental results of human nerve stimulation using the developed stimulator are presented. 30768 T00401030768 ^x In this paper, we describe the design methodology and specifications of the developed module-based bedside monitors for patient monitoring. The bedside monitor consists of a main unit and module cases with various parameter modules. The main unit includes a 12.1" TFT color LCD, a main CPU board, and peripherals such as a module controller, an Ethernet LAN card, a video card, a rotate/push button controller, etc. The main unit can connect a maximum of three module cases, each of which can accommodate up to 7 parameter modules. These include the modules for electrocardiograph, respiration, invasive blood pressure, noninvasive blood pressure, temperature, and SpO2 with plethysmograph.
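The capacity constraints in the bedside-monitor record above (at most three module cases, up to 7 parameter modules per case) can be sketched as a small object model; the class, method, and module names below are illustrative assumptions, not the developers' actual software design.

```python
# Capacity limits taken from the abstract; everything else is illustrative.
MAX_CASES = 3
MAX_MODULES_PER_CASE = 7

class ModuleCase:
    """One module case holding up to 7 parameter modules."""
    def __init__(self):
        self.modules = []

    def add_module(self, name):
        if len(self.modules) >= MAX_MODULES_PER_CASE:
            return False            # case is full
        self.modules.append(name)
        return True

class BedsideMonitor:
    """Main unit connecting up to 3 module cases."""
    def __init__(self):
        self.cases = []

    def connect_case(self, case):
        if len(self.cases) >= MAX_CASES:
            return False            # all case slots occupied
        self.cases.append(case)
        return True

    def module_count(self):
        return sum(len(c.modules) for c in self.cases)
```

With these limits, one main unit tops out at 3 × 7 = 21 parameter modules, which matches the abstract's stated maximum configuration.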

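The 4-nearest-point interpolation named in the BEAM record (30594) maps scattered electrode potentials onto a 2-D image grid; a minimal inverse-distance-weighting sketch follows. The electrode layout, the first-power distance weighting, and the function name are assumptions, not details from the abstract.

```python
import math

def interpolate(point, electrodes):
    """electrodes: list of ((x, y), potential). Returns the potential at
    `point` as a distance-weighted average of the 4 nearest electrodes."""
    nearest = sorted(electrodes,
                     key=lambda e: math.dist(point, e[0]))[:4]
    weights = []
    for pos, v in nearest:
        d = math.dist(point, pos)
        if d == 0.0:
            return v                      # exactly on an electrode
        weights.append((1.0 / d, v))      # inverse-distance weight
    total = sum(w for w, _ in weights)
    return sum(w * v for w, v in weights) / total
```

Evaluating this at every pixel of a 2-D grid yields the topographic map; the dot-density or colour coding described in the abstract is then just a quantization of the interpolated values into 9 or 12 levels.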