• Title/Summary/Keyword: 실제적 이론 (practical theory)

An Estimation of the Efficiency and Satisfaction for EEG Practice Using the Training 10-20 Electrode System: A Questionnaire Survey (연습용 10-20 Electrode System을 이용한 뇌파검사 실습의 효율성과 만족도 평가)

  • Lee, Chang Hee;Kim, Dae Jin;Choi, Jeong Su;Lee, Jong-Woo;Lee, Min Woo;Cho, Jae Wook;Kim, Suhng Wook
    • Korean Journal of Clinical Laboratory Science / v.49 no.3 / pp.300-307 / 2017
  • Electroencephalography (EEG) is distinct from other medical imaging tests in that it is a functional test that helps diagnose brain-related disorders such as epilepsy. The most important skills for a medical technologist performing an EEG are knowing the exact locations of the electrodes and recording the EEG waves clearly, free of artifacts. Although both theoretical education and practical training are included in the curriculum to develop these skills, practical training has been insufficient owing to problems such as expensive equipment and limited practice time. We tried to address these issues by manufacturing a training 10-20 electrode system and by estimating its efficiency and the satisfaction it produced through a questionnaire. The time required for practical training using this system was 43.58 ± 9.647 min, which proved efficient. The satisfaction score of participants who had experienced the curriculum's practical training improved from 7.21 ± 2.285 to 9.46 ± 1.166. Based on these findings, practical training using the training 10-20 electrode system is expected to resolve problems such as the lack of equipment and insufficient practice time. Nonetheless, to improve the system further, its limitations must be overcome by developing a device capable of checking actual brain waves and validating the exact locations of electrode attachment.

Development of a Dynamic Offtracking Model on Horizontal Curve Sections (Based on Articulated Vehicles) (도로 평면곡선부에서 동적궤도이탈모형 개발에 관한 연구 (굴절차량을 중심으로))

  • 최재성;김우현
    • Journal of Korean Society of Transportation / v.20 no.3 / pp.115-128 / 2002
  • Unlike tangent sections, the horizontal curve sections of roads must be designed with several additional factors in mind; one of these is widening. Widening is needed because, when a vehicle travels through a horizontal curve, its rear wheels do not follow the paths of the front wheels but track inside them, so this offtracking must be investigated precisely and reflected in the design of curve sections. Especially on industrial roads carrying frequent semi-trailer and large-truck traffic, or on arterial roads with small curve radii in mountainous regions, severe offtracking increases the risk of accidents, reduces capacity, and jeopardizes pedestrian safety on curves. Widening amounts have traditionally been determined under the assumption that vehicles travel at low speed and that there is no superelevation. In reality, however, vehicles travel at high as well as low speeds, and superelevation is installed on horizontal curves as part of the road structure, so the existing widening standards cannot fully reflect actual behavior. In particular, for the articulated axles of a tractor and trailer and for long articulated vehicles, not only is the degree of offtracking very large, but its interpretation also differs from that of single-unit vehicles. By comparing and reviewing Korean and foreign studies on trailer offtracking model theory and widening standards, this study developed a realistic dynamic offtracking model that considers the geometric structure of roads and vehicle speeds, suggested how to determine widening with this model, and examined the model's applicability. The findings are as follows. First, a dynamic offtracking model that considers the dynamic movements of a tractor and trailer and the superelevation was developed. Second, a new method of determining widening with the developed model was devised, and a method of determining widening from the swept path width was suggested as well. Finally, the validity of the current widening standards was examined by computing actual offtracking and widening amounts with the developed model, and applicability was investigated through case studies. Compared with existing offtracking models, the dynamic model developed in this study can realistically reflect vehicle speed, vehicle dimensions, and the geometric structure of roads. In conclusion, the significance of this study is that it reviews the validity of the current widening standards and provides a basis for establishing such standards by suggesting new methods of determining widening with the dynamic offtracking model.
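
For context on the baseline this study improves upon, the following is a minimal sketch of the classical low-speed (kinematic) offtracking approximation for an articulated vehicle, not the dynamic model developed in the paper; the curve radius and wheelbase values are illustrative assumptions.

```python
import math

def kinematic_offtracking(path_radius_m: float, wheelbases_m: list[float]) -> float:
    """Classical low-speed offtracking approximation for an articulated vehicle.

    Each unit's wheelbase pulls the following axle further inside the curve; the
    combined offtracking is R - sqrt(R^2 - sum(L_i^2)). This ignores speed and
    superelevation, which the dynamic model in the paper accounts for.
    """
    inner = path_radius_m**2 - sum(L**2 for L in wheelbases_m)
    if inner <= 0:
        raise ValueError("curve radius too small for this vehicle combination")
    return path_radius_m - math.sqrt(inner)

# Illustrative values: a tractor (5.5 m wheelbase) plus semi-trailer (10 m) on a 15 m radius curve.
print(round(kinematic_offtracking(15.0, [5.5, 10.0]), 2), "m of offtracking")
```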

A study on the comparison by the methods of estimating the relaxation load of SEM-pile (SEM파일의 이완하중 산정방법별 이완하중량 비교 연구)

  • Kim, Hyeong-Gyu;Park, Eun-Hyung;Cho, Kook-Hwan
    • Journal of Korean Tunnelling and Underground Space Association / v.20 no.3 / pp.543-560 / 2018
  • With the increased development of downtown underground space facilities that cross under railways at shallow depth, the demand for non-open-cut methods is increasing. However, most construction sites still adopt the pipe roof method, in which medium and large diameter steel pipes are pressed in to form a roof, enabling excavation of the space inside. Among the many factors that influence the loosening region and the loads that occur while pressing in the steel pipes, the pipe size has the largest impact, and this factor may correspond to the magnitude of the load applied to the underground structure inside the steel pipe roof. The super equilibrium method (SEM) has been developed to minimize ground disturbance and loosening load; it uses small diameter pipes of approximately 114 mm instead of the conventional medium and large diameter pipes. This small diameter steel pipe is called an SEM pile. After the SEM piles are pressed in and the grouting reinforcement is constructed, the crossing structure is pressed in with hydraulic jacks without ground subsidence or heaving. The SEM pile, which plays the role of timbering, is a fore-poling pile of approximately 5 m length that prevents ground collapse and supports the surface load during excavation of the toe part. The loosening region must be calculated adequately to estimate the spacing and installation length of the piles and the stiffness of the members. In this paper, we conducted a comparative analysis of methods for calculating the loosening load that occurs during press-in of SEM piles in order to obtain an optimal SEM design. We analyzed the influence of the factors in the main theoretical and empirical formulas used to calculate loosening regions, and carried out FEM analyses to determine an appropriate loosening load for the SEM pile. To estimate the soil loosening caused by actual SEM pile press-in and excavation, a reduced-scale steel pipe press-in model test was conducted, and ground subsidence and soil loosening were investigated quantitatively according to the ratio of soil cover to steel pipe diameter (H/D).
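
As one example of the kind of theoretical formula commonly compared for loosening loads, the sketch below evaluates Terzaghi's arching-based vertical loosening pressure; it is a minimal illustration with assumed soil parameters, not the calculation procedure adopted in this paper.

```python
import math

def terzaghi_loosening_pressure(B1, gamma, c, phi_deg, H, K=1.0, q0=0.0):
    """Terzaghi's vertical loosening earth pressure above a yielding opening.

    sigma_v = B1*(gamma - c/B1)/(K*tan(phi)) * (1 - exp(-K*tan(phi)*H/B1))
              + q0 * exp(-K*tan(phi)*H/B1)
    B1: half-width of the loosened zone (m), gamma: unit weight (kN/m^3),
    c: cohesion (kPa), phi_deg: friction angle (deg), H: cover depth (m),
    K: lateral pressure coefficient, q0: surcharge (kPa). Returns sigma_v in kPa.
    """
    t = K * math.tan(math.radians(phi_deg))
    decay = math.exp(-t * H / B1)
    return B1 * (gamma - c / B1) / t * (1.0 - decay) + q0 * decay

# Illustrative (assumed) parameters for a shallow crossing in weathered soil.
print(round(terzaghi_loosening_pressure(B1=1.5, gamma=19.0, c=5.0, phi_deg=30.0, H=6.0), 1), "kPa")
```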

Sewer Decontamination Mechanism and Pipe Network Monitoring and Fault Diagnosis of Water Network System Based on System Analysis (시스템 해석에 기초한 하수관망 오염 매카니즘과 관망 모니터링 및 이상진단)

  • Kang, OnYu;Lee, SeungChul;Kim, MinJeong;Yu, SuMin;Yoo, ChangKyoo
    • Korean Chemical Engineering Research / v.50 no.6 / pp.980-987 / 2012
  • Nonpoint source pollution, introduced into the sewer by wash-off, causes leaks and overflows depending on the state of the sewer network and aggravates the pollution load of the receiving water body. Accordingly, the need for an efficient sewer monitoring system that can manage sewage flow rate, water quality, inflow/infiltration, and overflow has grown, both for sewer maintenance and for the prevention of environmental pollution. However, sewer monitoring is not easy, since the sewer network is built underground and its structure and connections are complex. This study proposes a sewer decontamination mechanism together with pipe network monitoring and fault diagnosis of the water network system based on system analysis. First, the pollutant removal pattern and the behavior of contaminants in the sewer pipe network are analyzed using a sewer process simulation program, the stormwater and wastewater management model for experts (XP-SWMM). Second, sewer network fault diagnosis is performed using multivariate statistical monitoring to track water quality in the sewer and to detect sewer leakage and bursts. Analysis of the decontamination mechanism under static and dynamic conditions showed that the loads of total nitrogen (TN) and total phosphorus (TP) are much greater during rainfall than in dry weather, which aggravates the pollution load on the receiving water. Accordingly, the sewer overflow in the pipe network is analyzed with respect to the increased flow and pollutant inflow caused by rainfall. The proposed sewer network monitoring and fault diagnosis technique can be used effectively for nonpoint source pollution management of an urban watershed as well as for a continuous monitoring system.
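
The multivariate statistical monitoring mentioned above is typically realized with PCA-based control charts; the following is a minimal sketch assuming Hotelling's T² and SPE (Q) statistics on simulated water-quality data, not the exact configuration used in this study.

```python
import numpy as np

def fit_pca_monitor(X_normal: np.ndarray, n_components: int):
    """Fit a PCA monitoring model (mean, scale, loadings, eigenvalues) on normal-operation data."""
    mu, sd = X_normal.mean(axis=0), X_normal.std(axis=0)
    Z = (X_normal - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                        # retained loadings
    lam = (S[:n_components] ** 2) / (len(Z) - 1)   # retained eigenvalues
    return mu, sd, P, lam

def t2_spe(x: np.ndarray, mu, sd, P, lam):
    """Hotelling's T^2 and SPE (Q) statistics for one new sample."""
    z = (x - mu) / sd
    t = P.T @ z                       # scores in the retained subspace
    t2 = float(np.sum(t**2 / lam))    # variation inside the model
    residual = z - P @ t
    spe = float(residual @ residual)  # variation outside the model
    return t2, spe

# Illustrative use with simulated "normal" sewer water-quality data (e.g. flow, TN, TP, turbidity).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
mu, sd, P, lam = fit_pca_monitor(X, n_components=2)
print(t2_spe(X[0], mu, sd, P, lam))   # compare against control limits to flag leaks or bursts
```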

Study on Fire Hazard Analysis along with Heater Use in the Public Use Facility Traditional Market in Winter (겨울철 다중이용시설인 전통재래시장 난방기구 사용에 따른 화재 위험성 분석에 관한 연구)

  • Ko, Jaesun
    • Journal of the Society of Disaster Information / v.10 no.4 / pp.583-597 / 2014
  • Fires caused by heaters have as many causes as there are types of heater, and because heater-related fires recur in traditional markets every year, they continue to cause considerable loss of life and property. Fires due to heaters are rare in most residential facilities such as houses and apartments, which are served by heating boilers; however, the restaurants, stores, and offices of markets, as well as sports centers, factories, and workplaces, still use heaters such as oil stoves and electric heaters, and are therefore exposed to fire hazards. An analysis of the number of heater-related fires showed that they occur, in decreasing order, with home boilers, charcoal stoves, oil stoves, gas heaters/stoves, and electric stoves/heaters, while the casualties per fire were highest, in order, for gas heaters/stoves, oil heaters/stoves, electric heaters/stoves, and briquette/coal heaters. Gas- and oil-fired heaters were found to have a low fire frequency but a high fire intensity. Therefore, this research aimed at a more scientific fire inspection and identification approach by re-enacting and reviewing the possibility of fire outbreak from contact with combustibles and conduction under normal and abnormal conditions, in terms of ignition hazard, i.e., minimum ignition temperature, degree of carbonization, and the accompanying heat flux, for the oil stove and the electric stove, which are still frequently used in the traditional market, a public use facility, and which account for the most actual fires. In the re-enactment tests, the ignition hazard was very small for both test objects (oil stove and electric stove) as long as sufficient heat-storage conditions were not created; however, carbonization was observed to proceed in each part. Ultimately, transition to fire is ignition due to heat storage, and ignition was found to occur when the minimum heat-storage temperature at the point of fire origin exceeded 500°C. In particular, the quartz pipe that serves as the heating element of the electric stove is heated rapidly to over 600°C within a very short time (10 s), and the resulting heat flux of 6.26 kW/m² was analyzed to be sufficient to damage thermal PVC cable and cause second-degree burns on the human body. The temperature change associated with the geometric view factor and the fire load, which govern the decrease of heat with distance, was also recognized as an important variable to be considered alongside the temperature condition. Therefore, a manual for careful fire inspection and identification on this subject is considered necessary, and the scientific and rational efforts of this research are expected to contribute to establishing such a manual and a theoretical basis for future fire inspection and identification.
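
To illustrate how the geometric view factor ties the heating-element temperature to the heat flux received at a distance (the 6.26 kW/m² figure above), here is a minimal sketch using the Stefan-Boltzmann law; the emissivity and view factor values are assumptions for illustration, not measurements from the study.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def received_heat_flux(t_source_c: float, t_ambient_c: float,
                       emissivity: float, view_factor: float) -> float:
    """Radiative heat flux (kW/m^2) received by a target from a hot surface.

    q'' = F * eps * sigma * (T_s^4 - T_amb^4), with temperatures in kelvin.
    The view factor F captures the geometric decrease of heat with distance and orientation.
    """
    ts, ta = t_source_c + 273.15, t_ambient_c + 273.15
    return view_factor * emissivity * SIGMA * (ts**4 - ta**4) / 1000.0

# Illustrative: a quartz element near 600 degC seen with an assumed view factor of 0.2.
print(round(received_heat_flux(600.0, 20.0, emissivity=0.9, view_factor=0.2), 2), "kW/m^2")
```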

A Study on Shape Optimization of Plane Truss Structures (평면(平面) 트러스 구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(硏究))

  • Lee, Gyu won;Byun, Keun Joo;Hwang, Hak Joo
    • KSCE Journal of Civil and Environmental Engineering Research / v.5 no.3 / pp.49-59 / 1985
  • Formulating the geometric optimization of truss structures based on elasticity theory turns out to be a nonlinear programming problem that must handle the cross-sectional areas of the members and the coordinates of the nodes simultaneously. A few techniques have been proposed and adopted to date for solving this nonlinear programming problem. These techniques, however, carry limitations on truss shapes, loading conditions, and design criteria that restrict their practical application to real structures. A generalized algorithm for the geometric optimization of truss structures that eliminates the above limitations is developed in this study. The developed algorithm uses a two-phase technique. In the first phase, the cross-sectional areas of the truss members are optimized by transforming the nonlinear problem into a SUMT formulation and solving it with a modified Newton-Raphson method. In the second phase, the geometric shape is optimized using the unidirectional search technique of the Rosenbrock method, which makes it possible to minimize only the objective function. The algorithm is numerically tested on several truss structures with various shapes, loading conditions, and design criteria, and the results are compared with those of other algorithms to examine its applicability and stability. The numerical comparisons show that the two-phase algorithm developed in this study is safely applicable to any design criterion, and that its convergence is fast and stable compared with other iterative methods for the geometric optimization of truss structures.
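
As a rough illustration of the first phase described above, the sketch below minimizes a constrained objective with a SUMT-style exterior penalty driven by a gradient-based optimizer; the toy objective, constraint, and penalty schedule are assumptions for illustration, not the truss formulation of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def sumt(objective, constraints, x0, r0=1.0, growth=10.0, iters=6):
    """SUMT with an exterior quadratic penalty: solve a sequence of unconstrained
    problems  f(x) + r * sum(max(0, g_i(x))^2)  with a growing penalty weight r."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(iters):
        def penalized(x):
            viol = [max(0.0, g(x)) for g in constraints]
            return objective(x) + r * sum(v * v for v in viol)
        x = minimize(penalized, x, method="BFGS").x   # unconstrained subproblem
        r *= growth
    return x

# Toy example (assumed): minimize (x1-3)^2 + (x2-2)^2 subject to x1 + x2 <= 4.
obj = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2
cons = [lambda x: x[0] + x[1] - 4.0]
print(sumt(obj, cons, x0=[0.0, 0.0]))   # approaches (2.5, 1.5) as r grows
```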


Development of Mixed-bed Ion Exchange Resin Capsule for Water Quality Monitoring (수질 중 질소와 인 모니터링을 위한 혼합이온교환수지 캡슐의 개발)

  • Park, Chang-Jin;Kim, Dong-Kuk;Ok, Yong-Sik;Ryu, Kyung-Ryul;Lee, Ju-Young;Zhang, Yong-Seon;Yang, Jae-E
    • Applied Biological Chemistry / v.47 no.3 / pp.344-350 / 2004
  • This study was conducted to develop and assess the applicability of mixed-bed ion exchange resin capsules for water quality monitoring in a small agricultural watershed. Recoveries of the resin capsules for inorganic N and P ranged from 96 to 102%. The net activation energies and pseudo-thermodynamic parameters for ion adsorption by the resin capsules, such as ΔG°‡, ΔH°‡, and ΔS°‡, exhibited relatively low values, indicating that the process might be governed by diffusion. However, those values increased with temperature, in agreement with theory. The reaction reached pseudo-equilibrium within 24 hours for NH₄-N and NO₃-N, and within only 8 hours for PO₄-P. The selectivity of the resin capsules was in the order NO₃⁻ > NH₄⁺ > PO₄³⁻, coinciding with that of the encapsulated Amberlite IRN-150 resin. At the initial state of equilibrium, the resin adsorption quantity was linearly proportional to the mass of ions in the streams, but the rate of uptake leveled off, following a Langmuir-type sorption isotherm. The overall results demonstrated that the resin capsule system was suitable for water quality monitoring in a small agricultural watershed, judging from the reaction mechanism(s) of the resin capsule and the significance of the model in field calibration.
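
Since the adsorption behavior is described as following a Langmuir-type sorption isotherm, the sketch below fits that isotherm, q = q_max·K·C/(1 + K·C), to data; the concentration and uptake values are synthetic assumptions for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Langmuir sorption isotherm: uptake q as a function of solution concentration C."""
    return q_max * K * C / (1.0 + K * C)

# Synthetic "observed" data (assumed values), e.g. PO4-P uptake by the resin capsule.
C_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # mg/L in solution
q_obs = langmuir(C_obs, q_max=12.0, K=0.6) + np.random.default_rng(1).normal(0, 0.2, C_obs.size)

(q_max_fit, K_fit), _ = curve_fit(langmuir, C_obs, q_obs, p0=(10.0, 0.5))
print(f"q_max ~ {q_max_fit:.2f} mg/g, K ~ {K_fit:.3f} L/mg")
```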

Benchmark Results of a Monte Carlo Treatment Planning System (몬테카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics / v.13 no.3 / pp.149-155 / 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing make the clinical use of Monte Carlo based dose calculations possible. To compare the speed and accuracy of dose calculations between different codes, a set of benchmark tests was proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising 28 Intel Pentium CPUs was implemented for routine clinical use. The purpose of this study was to evaluate the performance of our system using the above benchmark tests. The benchmark procedure comprises three parts: a) speed of photon beam dose calculation inside a given phantom of 30.5 cm × 39.5 cm × 30 cm deep, filled with 5 mm³ voxels, to within 2% statistical uncertainty; b) speed of electron beam dose calculation inside the same phantom as for the photon beams; c) accuracy of photon and electron beam calculations inside a heterogeneous slab phantom compared with reference EGS4/PRESTA results. In the speed benchmark tests, it took 5.5 minutes to achieve less than 2% statistical uncertainty for 18 MV photon beams. Although the net calculation for electron beams was an order of magnitude faster than that for photon beams, the overall calculation time was similar to the photon beam case because of the overhead required to maintain parallel processing. Since our Monte Carlo code is EGSnrc, an improved version of EGS4, the accuracy tests of our system showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system produces clinically meaningful results. Although more efficient codes such as MCDOSE and VMC++ have been developed, a BEAMnrc system based on the EGSnrc code may be used for routine clinical Monte Carlo treatment planning in conjunction with a clustering technique.
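
The 2% statistical uncertainty criterion above is typically estimated from independent batches of histories; the following is a minimal sketch of that estimate on simulated dose scores, assuming a simple batch method rather than the history-by-history estimator that EGSnrc actually uses.

```python
import numpy as np

def batch_relative_uncertainty(batch_doses: np.ndarray) -> float:
    """Relative statistical uncertainty of a Monte Carlo dose estimate from N batches.

    s_Dbar / Dbar, with s_Dbar = std(batches) / sqrt(N): the uncertainty of the mean
    shrinks as 1/sqrt(N), so quadrupling the histories roughly halves it.
    """
    n = batch_doses.size
    mean = batch_doses.mean()
    std_of_mean = batch_doses.std(ddof=1) / np.sqrt(n)
    return float(std_of_mean / mean)

# Simulated batch dose scores for one voxel (assumed values, arbitrary units).
rng = np.random.default_rng(42)
batches = rng.normal(loc=2.00, scale=0.08, size=20)
print(f"relative uncertainty ~ {100 * batch_relative_uncertainty(batches):.2f}%")
```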


A Fast Algorithm for Computing Multiplicative Inverses in GF(2^m) using Factorization Formula and Normal Basis (인수분해 공식과 정규기저를 이용한 GF(2^m) 상의 고속 곱셈 역원 연산 알고리즘)

  • 장용희;권용진
    • Journal of KIISE:Computer Systems and Theory / v.30 no.5_6 / pp.324-329 / 2003
  • Public-key cryptosystems such as Diffie-Hellman key distribution and elliptic curve cryptosystems are built on the operations defined in GF(2^m): addition, subtraction, multiplication, and multiplicative inversion. It is important that these operations be computed at high speed in order to implement these cryptosystems efficiently. Among them, multiplicative inversion is the most time-consuming and has therefore been the subject of much investigation. Fermat's theorem gives β⁻¹ = β^(2^m − 2), where β⁻¹ is the multiplicative inverse of β ∈ GF(2^m). Therefore, to compute the multiplicative inverse of an arbitrary element of GF(2^m), it is most important to reduce the number of multiplications by decomposing 2^m − 2 efficiently. Among the many algorithms on this subject, the algorithm proposed by Itoh and Tsujii [2] reduced the required number of multiplications to O(log m) by using a normal basis. A few papers have since presented algorithms improving on Itoh and Tsujii's, but they have drawbacks such as complicated decomposition processes [3,5]. In this paper, for the case of 2^m − 2, which is mainly used in practical applications, an efficient algorithm is proposed for computing the multiplicative inverse at high speed by using both the factorization formula x³ − y³ = (x − y)(x² + xy + y²) and a normal basis. The number of multiplications of the proposed algorithm is smaller than that of the Itoh-Tsujii algorithm, and the algorithm decomposes 2^m − 2 more simply than other proposed algorithms.
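
To make the Fermat-theorem inversion concrete, here is a minimal sketch that computes β⁻¹ = β^(2^m − 2) by plain square-and-multiply in a polynomial basis of GF(2^8) with the reduction polynomial 0x11B; these choices are illustrative assumptions, not the normal-basis decomposition that the Itoh-Tsujii algorithm and this paper optimize.

```python
M = 8
POLY = 0x11B  # irreducible polynomial x^8 + x^4 + x^3 + x + 1 for GF(2^8)

def gf_mul(a: int, b: int) -> int:
    """Carryless multiplication of a and b, reduced modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def gf_inv(beta: int) -> int:
    """beta^(2^m - 2) by square-and-multiply; Itoh-Tsujii-style decompositions need fewer multiplies."""
    if beta == 0:
        raise ZeroDivisionError("0 has no multiplicative inverse")
    result, power, e = 1, beta, (1 << M) - 2
    while e:
        if e & 1:
            result = gf_mul(result, power)
        power = gf_mul(power, power)
        e >>= 1
    return result

# Sanity check: beta * beta^-1 == 1 for every nonzero field element.
assert all(gf_mul(b, gf_inv(b)) == 1 for b in range(1, 1 << M))
```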

A Single Index Approach for Time-Series Subsequence Matching that Supports Moving Average Transform of Arbitrary Order (단일 색인을 사용한 임의 계수의 이동평균 변환 지원 시계열 서브시퀀스 매칭)

  • Moon Yang-Sae;Kim Jinho
    • Journal of KIISE:Databases / v.33 no.1 / pp.42-55 / 2006
  • We propose a single index approach for subsequence matching that supports moving average transform of arbitrary order in time-series databases. With the single index approach, we can reduce both storage space overhead and index maintenance overhead. Moving average transform is known to reduce the effect of noise and has been used in many areas such as econometrics, since it is useful for finding overall trends. However, previous methods incur index overhead both in storage space and in update maintenance, since they build several indexes to support arbitrary orders. In this paper, we first propose the concept of the poly-order moving average transform, which uses a set of order values rather than a single order value, by extending the original definition of moving average transform. That is, the poly-order transform produces a set of transformed windows from each original window, since it transforms each window not for one order value but for a set of order values. We then present theorems to formally prove the correctness of the poly-order transform based subsequence matching methods. Moreover, we propose two different subsequence matching methods supporting moving average transform of arbitrary order by applying the poly-order transform to previous subsequence matching methods. Experimental results show that, in all cases, the proposed methods improve performance significantly over the sequential scan. For real stock data, the proposed methods improve average performance by 22.4 to 33.8 times over the sequential scan. Compared with building a separate index for every moving average order, the proposed methods also reduce the storage space required for indexes significantly at the cost of only a small loss of performance (with 7 orders, the space is reduced by up to a factor of 7.0 while the performance degradation is only 9% to 42% on average). In addition to this superiority in performance, index space, and index maintenance, the proposed methods can be generalized to many other transforms besides moving average transform. Therefore, we believe that our work can be widely and practically used in transform-based subsequence matching methods.
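
To make the transform concrete, here is a minimal sketch of the moving average transform of order k and of the poly-order idea of transforming one window for a set of orders; the function names, order values, and price series are assumptions for illustration, not the index construction used in the paper.

```python
def moving_average_transform(seq: list[float], k: int) -> list[float]:
    """Order-k moving average transform: the i-th entry is the mean of seq[i:i+k]."""
    if k < 1 or k > len(seq):
        raise ValueError("order k must be between 1 and len(seq)")
    window_sum = sum(seq[:k])
    out = [window_sum / k]
    for i in range(k, len(seq)):
        window_sum += seq[i] - seq[i - k]   # slide the window in O(1) per step
        out.append(window_sum / k)
    return out

def poly_order_transform(window: list[float], orders: list[int]) -> dict[int, list[float]]:
    """Poly-order transform: one original window mapped to a set of transformed windows,
    one per order value, instead of building a separate index for every order."""
    return {k: moving_average_transform(window, k) for k in orders}

prices = [10.0, 12.0, 11.0, 13.0, 15.0, 14.0, 16.0, 18.0]
print(poly_order_transform(prices, orders=[2, 4]))
```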