• Title/Summary/Keyword: 동적관련성 (dynamic relatedness)


Synthetic Application of Seismic Piezo-cone Penetration Test for Evaluating Shear Wave Velocity in Korean Soil Deposits (국내 퇴적 지반의 전단파 속도 평가를 위한 탄성파 피에조콘 관입 시험의 종합적 활용)

  • Sun, Chang-Guk;Kim, Hong-Jong;Jung, Jong-Hong;Jung, Gyung-Ja
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.3
    • /
    • pp.207-224
    • /
    • 2006
  • It has been widely known that the seismic piezo-cone penetration test (SCPTu) is one of the most useful techniques for investigating geotechnical characteristics such as static and dynamic soil properties. As practical applications in Korea, SCPTu was carried out at two sites in Busan and four sites in Incheon, which are mainly composed of alluvial or marine soil deposits. From the SCPTu waveform data obtained at the testing sites, the first arrival times of shear waves and the corresponding time differences with depth were determined using the cross-over method, and the shear wave velocity $(V_S)$ profiles with depth were derived by the refracted ray path method based on Snell's law. Comparing the determined $V_S$ profile with the cone tip resistance $(q_t)$ profile, the trends of both profiles with depth were similar. For the application of the conventional CPTu to earthquake engineering practice, correlations between $V_S$ and CPTu data were deduced from the SCPTu results. For the empirical evaluation of $V_S$ for all soils, together with clays and sands classified unambiguously in this study by the soil behavior type classification index $(I_C)$, the authors suggest $V_S$-CPTu data correlations expressed as a function of four parameters, $q_t$, $f_s$, $\sigma'_{v0}$ and $B_q$, determined by multiple statistical regression modeling. Despite the incompatible strain levels of the downhole seismic test during SCPTu and the conventional CPTu, it is shown that the $V_S$-CPTu data correlations for all soils, clays and sands suggested in this study are applicable to the preliminary estimation of $V_S$ for soil deposits in parts of Korea and are more reliable than the correlations previously proposed by other researchers.
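A correlation of this kind can be sketched as a multiple regression fitted in log space. The power-law form, the coefficient values, and the synthetic data below are all assumptions for illustration; the paper's actual regression parameters are not given in the abstract, and the pore-pressure parameter $B_q$ is omitted here for simplicity.

```python
import numpy as np

# Hypothetical sketch: fit Vs = a * qt^b * fs^c * sv0^d by linear
# least squares in log space. All data here are synthetic.
rng = np.random.default_rng(0)
n = 50
qt = rng.uniform(0.5, 20.0, n)    # cone tip resistance (MPa)
fs = rng.uniform(5.0, 200.0, n)   # sleeve friction (kPa)
sv0 = rng.uniform(20.0, 400.0, n)  # effective overburden stress (kPa)
# synthetic "true" Vs with multiplicative noise, for demonstration only
vs = 80.0 * qt**0.2 * fs**0.1 * sv0**0.15 * rng.lognormal(0.0, 0.05, n)

# design matrix in log space: log Vs = log a + b log qt + c log fs + d log sv0
X = np.column_stack([np.ones(n), np.log(qt), np.log(fs), np.log(sv0)])
coef, *_ = np.linalg.lstsq(X, np.log(vs), rcond=None)
a, b, c, d = np.exp(coef[0]), coef[1], coef[2], coef[3]
vs_pred = a * qt**b * fs**c * sv0**d
r2 = 1.0 - np.sum((vs - vs_pred)**2) / np.sum((vs - vs.mean())**2)
print(round(b, 2), round(r2, 2))
```

With clean synthetic data the fit recovers the exponents closely; real CPTu data would scatter far more, which is why the paper reports separate correlations for clays and sands.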

The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.111-123
    • /
    • 2013
  • The semantic similarity/relatedness measure between two concepts plays an important role in research on system integration and database integration. Moreover, current research on keyword recommendation or tag clustering strongly depends on this kind of semantic measure. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods of calculating the semantic similarity/relatedness measure. The study of similarity between concepts is meant to discover how a computational process can model the action of a human in determining the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge such as a semantic network or a dictionary to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the conceptual similarity between two nodes are also regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity between concepts based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches have assumed that the semantic network is static; that is, the topological approach has not considered changes in the semantic relations between concepts in the semantic network.
However, as information and communication technologies make advances in sharing knowledge among people, semantic relations between concepts in a semantic network may change. To explain this change in semantic relations, we adopt cognitive semantics. The basic assumption of cognitive semantics is that humans judge the semantic relation based on their cognition and understanding of concepts. This cognition and understanding is called 'world knowledge,' which can be categorized as personal knowledge and cultural knowledge. Personal knowledge means knowledge from personal experience; everyone can have different personal knowledge of the same concept. Cultural knowledge is the knowledge shared by people who live in the same culture or use the same language; people in the same culture have a common understanding of specific concepts. Cultural knowledge can be the starting point of a discussion about the change of semantic relations: if the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on the semantic relationship between concepts. In this paper, we propose future directions for research on semantic similarity. In other words, we discuss how research on semantic similarity can reflect the change of semantic relations caused by the change of cultural knowledge. We suggest three directions for future research on semantic similarity. First, the research should include versioning and update methodology for the semantic network. Second, a dynamically generated semantic network can be used for the calculation of semantic similarity between concepts. If researchers can develop a methodology to extract the semantic network from a given knowledge base in real time, this approach can solve many problems related to the change of semantic relations.
Third, the statistical approach based on corpus analysis can be an alternative to the method using a semantic network. We believe that these proposed research directions can be milestones for research on semantic relations.
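The edge-based (conceptual distance) approach described above can be illustrated on a toy taxonomy: similarity decreases with the number of edges separating two nodes through their lowest common ancestor. The taxonomy and the distance-to-similarity mapping below are invented for illustration; real systems would use a resource such as WordNet.

```python
# Tiny hand-made taxonomy: each node maps to its parent (root -> None).
parent = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "animal": None,
}

def path_to_root(node):
    """Return the list of nodes from `node` up to the root."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def edge_distance(a, b):
    """Number of edges from a to b through their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    ancestors = set(pa)
    for i, n in enumerate(pb):
        if n in ancestors:           # lowest common ancestor found
            return pa.index(n) + i   # edges a->LCA plus LCA->b
    return len(pa) + len(pb)         # disjoint trees (not expected here)

def similarity(a, b):
    # one simple monotone mapping from distance to similarity
    return 1.0 / (1.0 + edge_distance(a, b))

print(edge_distance("dog", "wolf"))  # siblings under "canine"
print(edge_distance("dog", "cat"))   # related only via "mammal"
```

The static-network limitation discussed in the abstract is visible here: any cultural shift in how concepts relate would require editing the `parent` table itself.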

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications such as traffic control, reduction of delay and reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry data at high speed thanks to their straightness, while their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes creates overload in its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay.
Since conventional centralized SDN structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be separated on a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, RTD is not a significant factor because it is fast enough, contributing less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information should be transmitted and processed very quickly; that is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture in emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50-250 m in cell radius, and the maximum speed of the vehicle was varied from 30 to 200 km/h in order to examine the network architecture that minimizes the delay.
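The simulation ranges above imply a tight dwell-time budget: a fast vehicle crosses a small cell in a few seconds, within which the information change cycle and SDN processing must complete. A minimal sketch of that constraint, under the simplifying assumption that the vehicle drives straight through the cell center (so the path length is the diameter, the best case):

```python
# Best-case time a vehicle spends inside a circular 5G small cell,
# for the cell radii (50-250 m) and speeds (30-200 km/h) assumed above.
def dwell_time_s(cell_radius_m, speed_kmh):
    speed_ms = speed_kmh / 3.6        # km/h -> m/s
    return 2 * cell_radius_m / speed_ms  # chord through the center

for r in (50, 250):                   # cell radius in meters
    for v in (30, 200):               # vehicle speed in km/h
        print(f"r={r} m, v={v} km/h -> {dwell_time_s(r, v):.1f} s")
```

At 200 km/h in a 50 m cell the dwell time is under 2 s, which is why the abstract treats the information change cycle, rather than the sub-millisecond RTD, as the dominant delay factor.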

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it makes it possible to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal crop patterns, and determine new greenhouse ranges. In this study, the control pattern for greenhouse microclimate is categorized as cooling and heating. A dynamic model was adopted to simulate heating requirements and energy conservation effectiveness, such as energy saving by a night-time thermal curtain, estimation of Heating Degree-Hours (HDH), and long-term prediction of greenhouse thermal behavior. On the other hand, the cooling effects of ventilation, shading, and pad & fan systems were partly analyzed by a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m found that cooling the greenhouse by spraying cold water directly on the greenhouse cover surface, or by recirculating cold water through heat exchangers, would be effective for greenhouse summer cooling. The mathematical model developed for greenhouse simulation is highly applicable because it can reflect various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was closely verified against various weather data obtained through long-period greenhouse experiments. Most of the material relating to greenhouse heating or cooling components was obtained from the greenhouse model simulated mathematically using typical-year (1987) data of Jinju, Gyeongnam, but some of the material relating to greenhouse cooling was obtained from model experiments, including an analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1.
The heating requirements of the model greenhouse were highly related to the minimum temperature set for the greenhouse. The setting temperature at night-time is much more influential on heating energy requirements than that at day-time; therefore, it is highly recommended that the night-time setting temperature be carefully determined and controlled. 2. The HDH data obtained by the conventional method were estimated on the basis of a considerably long-term average weather temperature together with the standard base temperature (usually 18.3°C). This kind of data can merely be used as a relative comparison criterion for heating load, but is not applicable to the calculation of greenhouse heating requirements because of the limited consideration of climatic factors and the inappropriate base temperature. By comparing the HDH data with the simulation results, it is found that a heating system designed from HDH data will probably overshoot the actual heating requirement. 3. The energy saving effect of the night-time thermal curtain, as well as the estimated heating requirement, is found to be sensitively related to weather conditions: the thermal curtain adopted for simulation showed high effectiveness in energy saving, amounting to more than 50% of the annual heating requirement. 4. The ventilation performance during warm seasons is mainly influenced by the air exchange rate, even though there are some variations depending on greenhouse structural differences and weather and cropping conditions. For air exchanges above 1 volume per minute, the reduction in temperature rise in both types of greenhouse considered becomes modest with additional increases in ventilation capacity. Therefore, the desirable ventilation capacity is assumed to be 1 air change per minute, which is the recommended ventilation rate for a common greenhouse. 5.
In a glass-covered greenhouse with full production, under clear weather at 50% RH and continuous ventilation of 1 air change per minute, the temperature drop in the 50% shaded greenhouse and the pad & fan greenhouse was 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air change at this time was 36.6°C, which was 5.3°C above the ambient temperature. As a result, the greenhouse temperature can be maintained 3°C below ambient temperature. But when RH is 80%, it was impossible to drop the greenhouse temperature below ambient, because the possible temperature reduction by the pad & fan system at this point is no more than 2.4°C. 6. During the 3 months of the hot summer season, if the greenhouse is assumed to be cooled only when the greenhouse temperature rises above 27°C, the relationship between the RH of ambient air and the greenhouse temperature drop (ΔT) was formulated as follows: ΔT = -0.077RH + 7.7. 7. Time-dependent cooling effects of each or a combination of ventilation, 50% shading, and pad & fan at 80% efficiency were continuously predicted over one typical summer day. When the greenhouse was cooled only by 1 air change per minute, the greenhouse air temperature was 5°C above the outdoor temperature. Either method alone cannot drop the greenhouse air temperature below the outdoor temperature, even under fully cropped conditions. But when both systems were operated together, the greenhouse air temperature could be controlled to about 2.0-2.3°C below ambient temperature. 8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a water flow rate of 1.3 liter/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0°C, which is about 10°C below the ambient temperature of 26.5-28.0°C at that time.
The most important factor in cooling greenhouse air effectively with water spray may be obtaining a plentiful source of cool water, such as ground water itself or cold water produced by a heat pump. Future work is focused not only on analyzing the feasibility of heat-pump operation but also on finding the relationships between greenhouse air temperature (T_g), spraying water temperature (T_w), water flow rate (Q), and ambient temperature (T_o).
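Two of the quantities above lend themselves to a short numerical sketch: the Heating Degree-Hours sum with the standard base of 18.3°C, and the empirical pad & fan relation ΔT = -0.077·RH + 7.7 from result 6. The hourly temperature readings below are made-up sample data, not the Jinju weather record.

```python
BASE_C = 18.3  # standard base temperature for HDH (degrees C)

def heating_degree_hours(hourly_temps_c, base_c=BASE_C):
    """Sum of (base - T) over the hours when T falls below the base."""
    return sum(base_c - t for t in hourly_temps_c if t < base_c)

def pad_fan_temp_drop_c(rh_percent):
    """Empirical relation from result 6: dT = -0.077*RH + 7.7."""
    return -0.077 * rh_percent + 7.7

sample_day = [5.0, 4.0, 10.0, 18.3, 20.0, 12.3]  # fabricated hourly readings
print(round(heating_degree_hours(sample_day), 1))
print(round(pad_fan_temp_drop_c(50), 2), round(pad_fan_temp_drop_c(80), 2))
```

The second line reproduces the abstract's point numerically: at 50% RH the achievable drop is several degrees, while at 80% RH it shrinks toward the ~2°C range where sub-ambient cooling becomes impossible.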


Analyzing Contextual Polarity of Unstructured Data for Measuring Subjective Well-Being (주관적 웰빙 상태 측정을 위한 비정형 데이터의 상황기반 긍부정성 분석 방법)

  • Choi, Sukjae;Song, Yeongeun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.83-105
    • /
    • 2016
  • Measuring an individual's subjective wellbeing in an accurate, unobtrusive, and cost-effective manner is a core success factor of the wellbeing support system, which is a type of medical IT service. However, measurements with a self-report questionnaire and wearable sensors are cost-intensive and obtrusive when the wellbeing support system should be running in real-time, despite being very accurate. Recently, reasoning the state of subjective wellbeing with conventional sentiment analysis and unstructured data has been proposed as an alternative to resolve the drawbacks of the self-report questionnaire and wearable sensors. However, this approach does not consider contextual polarity, which results in lower measurement accuracy. Moreover, there is no sentimental word net or ontology for the subjective wellbeing area. Hence, this paper proposes a method to extract keywords and their contextual polarity representing the subjective wellbeing state from the unstructured text in online websites in order to improve the reasoning accuracy of the sentiment analysis. The proposed method is as follows. First, a set of general sentimental words is proposed. SentiWordNet was adopted; this is the most widely used dictionary and contains about 100,000 words such as nouns, verbs, adjectives, and adverbs with polarities from -1.0 (extremely negative) to 1.0 (extremely positive). Second, corpora on subjective wellbeing (SWB corpora) were obtained by crawling online text. A survey was conducted to prepare a learning dataset that includes an individual's opinion and the level of self-report wellness, such as stress and depression. The participants were asked to respond with their feelings about online news on two topics. Next, three data sources were extracted from the SWB corpora: demographic information, psychographic information, and the structural characteristics of the text (e.g., the number of words used in the text, simple statistics on the special characters used). 
These were considered to adjust the level of a specific SWB. Finally, a set of reasoning rules was generated for each wellbeing factor to estimate the SWB of an individual based on the text written by that individual. The experimental results suggested that using contextual polarity for each SWB factor (e.g., stress, depression) significantly improved the estimation accuracy compared to conventional sentiment analysis methods incorporating SentiWordNet. Even though literature is available on Korean sentiment analysis, such studies used only a limited set of sentiment words. Due to the small number of words, many sentences are overlooked and ignored when estimating the level of sentiment. However, the proposed method can identify multiple sentiment-neutral words as sentiment words in the context of a specific SWB factor. The results also suggest that a specific type of senti-word dictionary containing contextual polarity needs to be constructed, along with a dictionary based on common sense such as SenticNet. These efforts will enrich and enlarge the application area of sentic computing. The study is helpful to practitioners and managers of wellness services in that a couple of characteristics of unstructured text have been identified for improving SWB. Consistent with the literature, the results showed that gender and age affect the SWB state when the individual is exposed to an identical cue from the online text. In addition, the length of the textual response and the usage pattern of special characters were found to indicate the individual's SWB. These imply that better SWB measurement should involve collecting the textual structure and the individual's demographic conditions. In the future, the proposed method should be improved by automated identification of contextual polarity in order to enlarge the vocabulary in a cost-effective manner.
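The core idea above, that a word neutral in a general lexicon can carry polarity for a specific SWB factor, can be sketched as a contextual lexicon layered over a general one. All word scores and the factor name below are invented for illustration; SentiWordNet itself scores synsets, not plain strings, so this is only the shape of the idea.

```python
# General polarity scores in SentiWordNet's range [-1.0, 1.0]
# (values fabricated for this sketch).
GENERAL = {"good": 0.8, "bad": -0.7, "deadline": 0.0, "exam": 0.0}

# Contextual polarity per SWB factor: words that are neutral in
# general but negative in the context of, e.g., stress.
CONTEXTUAL = {"stress": {"deadline": -0.6, "exam": -0.5}}

def score(text, factor=None):
    """Sum word polarities; contextual scores override general ones."""
    ctx = CONTEXTUAL.get(factor, {})
    return sum(ctx.get(w, GENERAL.get(w, 0.0)) for w in text.lower().split())

print(score("exam deadline bad"))            # general lexicon only
print(score("exam deadline bad", "stress"))  # stress-specific polarity
```

With the contextual layer, the two previously invisible words contribute to the stress estimate, which mirrors the accuracy gain the abstract reports over plain SentiWordNet scoring.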

Re-Analysis of Clark Model Based on Drainage Structure of Basin (배수구조를 기반으로 한 Clark 모형의 재해석)

  • Park, Sang Hyun;Kim, Joo Cheol;Jeong, Dong Kug;Jung, Kwan Sue
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.6
    • /
    • pp.2255-2265
    • /
    • 2013
  • This study presents a width function-based Clark model. To this end, a rescaled width function with a distinction between hillslope and channel velocity is used as the time-area curve, which is then routed through linear storage within the framework of an analytical expression for linear storage routing rather than the finite difference scheme used in the original Clark model. Three parameters are the focus of this study: the storage coefficient, hillslope velocity and channel velocity. SCE-UA, one of the popular global optimization methods, is applied to estimate them. The shapes of the resulting IUHs are evaluated in terms of three statistical moments of hydrologic response functions: the mean, the variance and the third moment about the center of the IUH. The correlation coefficients of the three statistical moments simulated in this study against those of observed hydrographs were estimated at 0.995 for the mean, 0.993 for the variance and 0.983 for the third moment about the center of the IUH. The resulting IUHs give satisfactory simulation results in terms of the mean and variance, but the third moment about the center of the IUH tends to be overestimated. The Clark model proposed in this study is superior to one taking into account only the mean and variance of the IUH with respect to skewness, peak discharge and peak time of the runoff hydrograph. From this result it is confirmed that the method suggested in this study is a useful tool to reflect the heterogeneity of drainage paths and hydrodynamic parameters. The variation of the statistical moments of the IUH is mainly influenced by the storage coefficient, and in turn the effect of channel velocity is greater than that of hillslope velocity. Therefore, the storage coefficient and channel velocity are the crucial factors shaping the form of the IUH and should be considered carefully when applying the Clark model proposed in this study.
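The linear-storage routing step above can be sketched with the exact (analytical) solution of the reservoir equation S = K·O for inflow held constant over each time step, O(t+Δt) = I + (O(t) − I)·e^(−Δt/K), in place of the original Clark finite-difference coefficients. The time-area ordinates and storage coefficient below are illustrative values, not the paper's data.

```python
import math

def route_linear_storage(inflow, k_hours, dt_hours=1.0):
    """Route a time-area histogram through a linear reservoir S = K*O
    using the exact step solution for piecewise-constant inflow."""
    out, o = [], 0.0
    for i in inflow:
        o = i + (o - i) * math.exp(-dt_hours / k_hours)
        out.append(o)
    return out

# illustrative time-area curve (inflow ordinates per hour)
time_area = [0.0, 2.0, 5.0, 3.0, 1.0, 0.0, 0.0, 0.0]
iuh = route_linear_storage(time_area, k_hours=2.0)
print([round(o, 3) for o in iuh])
```

As expected of a linear reservoir, the routed IUH is attenuated (lower peak) and delayed (later peak) relative to the time-area input, and the storage coefficient K controls how strongly.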

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.243-250
    • /
    • 2004
  • Monitoring is used to see whether a real-time system provides its service on time. Generally, monitoring for real-time systems focuses on investigating the current status of the system. To support stable performance, however, a real-time system should have not only a function to see the current status of real-time processes but also a function to predict their executions. The legacy prediction model has some limitations when applied to real-time monitoring. First, it performs a static prediction after a real-time process has finished. Second, it needs a statistical pre-analysis before a prediction. Third, the transition probability and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. This model removes unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction by a trend analysis of past execution data. Above all, we designed the model to support dynamic prediction, which is performed during a real-time process' execution. The results from some experiments show that the judgment accuracy is greater than 80% if the size of the training set is over 10 and, in the case of multi-level prediction, that the prediction difference is minimized if the number of executions is bigger than the size of the training set. The execution prediction model proposed here has some limitations: it uses the simplest learning algorithm, and it does not consider a multi-regional space model managing CPU, memory and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be used in areas related to real-time monitoring and control.
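The dynamic, pre-processing-free prediction described above can be sketched with an online estimator that updates as each execution completes. The abstract does not specify the learning algorithm, so a simple exponential moving average stands in here; the update rule, the smoothing factor, and the sample execution times are all assumptions for illustration.

```python
class ExecutionPredictor:
    """Online prediction of a real-time process's next execution time.
    A plain exponential moving average stands in for the paper's
    (unspecified) learning algorithm."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = None    # current prediction, None until trained

    def observe(self, exec_time_ms):
        if self.estimate is None:
            self.estimate = exec_time_ms
        else:
            self.estimate = (self.alpha * exec_time_ms
                             + (1 - self.alpha) * self.estimate)

    def predict(self):
        return self.estimate

# feed a training set of 10 past execution times (fabricated data)
p = ExecutionPredictor()
for t in [10.0, 12.0, 11.0, 13.0, 12.0, 12.5, 12.0, 11.5, 12.0, 12.2]:
    p.observe(t)
print(round(p.predict(), 1))
```

Because the estimate is refreshed on every observation, the prediction always reflects current data, which is the property the abstract contrasts with the legacy static model.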

The application of photographs resources for constructive social studies (구성주의적 사회과 교육을 위한 사진자료 활용방안)

  • Lee, Ki-Bok;Hwang, Hong-Seop
    • Journal of the Korean association of regional geographers
    • /
    • v.6 no.3
    • /
    • pp.117-138
    • /
    • 2000
  • This study, from the viewpoint of constructive social studies, which is the foundation of the 7th curriculum, explores whether there is any viable program by which students, using photo resources in social studies, can organize their knowledge through self-directed thinking. The main results are as follows. If it is a principle of the knowledge construction process of constructive social studies that individual construction (cognitive construction) develops into communal construction (social construction), and that communal construction in turn develops by interacting with individual construction, then this will meet the objectives of social studies. In social studies, photos are a powerful communication tool. Communicating with photos makes it possible to invoke not only the visual aspects but also the invisible aspects of social phenomena, and can therefore help develop thinking power through inquiry learning, which is one of the emphases of the 7th curriculum. An analysis of the photo resources appearing in the regional textbooks of elementary social studies showed that, even though the importance and the amount of space the photo resources occupy per page are large relative to total resources, most of the photos failed to lead to self-directed thinking and served merely as assistant material instead. Besides, some problems appeared with the title, variety, size, position, tone of color and visibility of the photos, and further with the combination of photos. Developing photo resources for constructive social studies means overcoming problems inherent in current textbooks and reflecting the theoretical background of the 7th curriculum. To develop the sort of photos that can realize this point, it would be highly preferable to provide a photo database on a homepage to facilitate study through web-based interaction.
To take advantage of constructive photo resources, instruction is organized in four stages: intuition, conflict, accommodation, and equilibration. With the advancement of the era of image culture, curriculum developers are required to develop dynamic, multidimensional digital photos rather than static photos when developing textbooks.


Tectonic Movement in the Korean Peninsula (I): The Spatial Distribution of Tectonic Movement Identified by Terrain Analyses (한반도의 지반운동 ( I ): DEM 분석을 통한 지반운동의 공간적 분포 규명)

  • Park, Soo-Jin
    • Journal of the Korean Geographical Society
    • /
    • v.42 no.3 s.120
    • /
    • pp.368-387
    • /
    • 2007
  • In order to explain the geomorphological characteristics of the Korean Peninsula, it is necessary to understand the spatial distribution of tectonic movements and its causes. Even though geomorphological elements that might have been formed by tectonic movements (e.g. tilted overall landform, erosion surfaces, river terraces, marine terraces, etc.) have long been considered main geomorphological research topics in Korea, knowledge of the spatial distribution of tectonic movement is still limited. This research aims to identify the spatial distribution of tectonic movement via sequential analyses of a Digital Elevation Model (DEM). This paper first develops a set of terrain analysis techniques derived from theoretical interrelationships between tectonic uplift and landsurface denudation processes. The terrain analyses used in this research assume that elevations along major drainage basin divides may preserve original landsurfaces (pseudo-landsurfaces) that were formed by tectonic movement with relatively little influence from denudation processes. Pseudo-landsurfaces derived from a DEM show clear spatial distribution patterns with distinct directional alignments. Lines connecting pseudo-landsurfaces in a certain direction are defined as pseudo-landsurface axes, which are again categorized into two groups: the first is uplift pseudo-landsurface axes, which indicate the axis of landmass uplift; the second is denudational pseudo-landsurface axes, which cross step-shaped pseudo-landsurfaces formed via surface denudation. In total, 13 pseudo-landsurface axes are identified in the Korean Peninsula, showing distinct direction, length, and relative uplift rate. Judging from the distribution of pseudo-landsurfaces and their axes, it is concluded that the Korean Peninsula can be divided into four tectonic regions, named the Northern Tectonic Region, Center Tectonic Region, Southern Tectonic Region, and East Sea Tectonic Region, respectively.
The Northern Tectonic Region experienced a regional uplift centered at the Kaema plateau, with the rate of uplift gradually decreasing toward the southern, western and eastern directions. The Center Tectonic Region shows an arch-shaped uplift; its uplift rate is highest along the East Sea and decreases toward the Yellow Sea. The Southern Tectonic Region shows an asymmetric uplift centered on a line connecting Dukyu and Jiri Mountains in the middle of the region; the eastern side of the region shows a higher uplift rate than the western side. The East Sea Tectonic Region includes the south-eastern coastal area of the peninsula and the Gilju-Myeongchun Jigudae, which shows relatively recent tectonic movements in Korea. Since this research visualizes the spatial heterogeneity of long-term tectonic movement in the Korean Peninsula, it provides valuable basic information on long-term and regional differences in geomorphological evolutionary processes and regional geomorphological differences of the Korean Peninsula.
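The pseudo-landsurface idea above can be illustrated in one dimension: local elevation maxima (stand-ins for drainage-divide points) are linearly interpolated to form an envelope surface little affected by valley incision. The toy profile below is fabricated, and the paper of course works on a full 2-D DEM with much more careful divide extraction.

```python
def pseudo_landsurface(profile):
    """Envelope through local maxima of a 1-D elevation profile,
    a crude stand-in for divide-based pseudo-landsurface extraction."""
    n = len(profile)
    # treat both endpoints as divide points, plus every interior local max
    peaks = [0] + [i for i in range(1, n - 1)
                   if profile[i] >= profile[i-1] and profile[i] >= profile[i+1]] + [n-1]
    surface = [0.0] * n
    for a, b in zip(peaks, peaks[1:]):
        for i in range(a, b + 1):   # linear interpolation between divides
            f = (i - a) / (b - a) if b > a else 0.0
            surface[i] = profile[a] + f * (profile[b] - profile[a])
    return surface

# toy ridge-and-valley profile (elevations in meters, fabricated)
profile = [900, 700, 850, 600, 800, 500, 750]
surface = pseudo_landsurface(profile)
print(surface)
```

The envelope sits on or above the incised profile everywhere, which is the property that lets divide elevations approximate the pre-denudation surface shaped by uplift.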

The Influence of Botulinum Toxin Type A on Masticatory Efficiency (보툴리눔 A형 독소가 저작효율에 미치는 영향)

  • Park, Hyung-Uk;Kwon, Jeong-Seung;Kim, Seong Taek;Choi, Jong-Hoon;Ahn, Hyung-Joon
    • Journal of Oral Medicine and Pain
    • /
    • v.38 no.1
    • /
    • pp.53-67
    • /
    • 2013
  • This study aimed to evaluate masticatory efficiency after botulinum toxin type A (BTX-A) injection over 12 weeks using objective and subjective tests. We also compared the difference in masticatory efficiency between a group injected into the masseter muscle only (M group) and a group injected into both the masseter and temporalis muscles (M-T group). The mixing ability index (MAI) was used as the objective indicator, and the visual analogue scale (VAS) and food intake ability (FIA) index were used as the subjective indicators. It was concluded that masticatory efficiency was significantly lowered after a BTX-A injection into the masticatory muscles, but it gradually recovered in a predictable pattern by 12 weeks. The disturbance of subjective masticatory efficiency lasted longer than that of objective masticatory efficiency. Masticatory efficiency was lower in the M-T group than in the M group; the difference was statistically significant in the VAS and FIA at 4 weeks, but the MAI showed no significance. After 4 weeks, there was rapid recovery of muscle function in the M-T group, and the difference between the two groups was no longer significant. It can be concluded that there will be no serious disturbance of mastication, compared to injection into the masseter muscle only, even if the injection is given into both the masseter and temporalis muscles at the dose used in this study. According to food properties, it was confirmed that people feel more discomfort eating hard and tough foods after BTX-A injection, and that intake of not only hard foods but also soft and runny foods was influenced by the botulinum toxin injection.