• Title/Summary/Keyword: forward speed

Search Results: 631

A Case of Multiple System Atrophy with Antecollis and Gait Disturbance Treated with Korean Medicine (경전굴 및 보행장애를 주소로 하는 다계통 위축증 환자의 한의 치료 증례 1례)

  • Kim, Seo-young;Choi, Jeong-woo;Jeong, Hye-seon;Lee, Sang-hwa;Yang, Seung-bo;Cho, Seung-yeon;Park, Jung-mi;Ko, Chang-nam;Park, Seong-uk
    • The Journal of Internal Korean Medicine
    • /
    • v.40 no.5
    • /
    • pp.851-864
    • /
    • 2019
  • Multiple system atrophy is a neurodegenerative disease that causes diverse bodily dysfunctions (cerebellar, pyramidal, autonomic, and urological, in any combination), as well as Parkinsonism. Patients with multiple system atrophy commonly display antecollis, a condition in which the head tilts forward by more than 45 degrees. Despite its common occurrence in these patients, no standardized therapy is currently effective for treating antecollis. In this study, Korean medicinal treatments, including Chuna manual therapy, pharmaco-acupuncture, bee venom acupuncture, acupuncture, herbal medicine, and moxibustion therapy, were administered to the patient over a 27-day period. After treatment, assessments of head position on the EPIS-PD scale (Part I) and in a standing position viewed from the side (Part II) both showed improvement. The head flexion angle in the upright, standing position decreased from 80 degrees to 30 degrees, improving the patient's head posture. As a result, the patient, who previously was unable to walk without the support of a walking frame, could now move freely and independently, with significant increases in both walking speed and distance. In essence, this study suggests that Korean medicine is an effective treatment for patients with multiple system atrophy who suffer from antecollis and gait disorders.

Analysis of the Damaged Range Caused by LPG Leakage and Vapor Clouds Considering the Cold Air Flow (찬공기 흐름을 고려한 LPG 누출 및 증기운에 의한 피해 영향 범위 분석)

  • Gu, Yun-Jeong;Song, Bonggeun;Lee, Wonhee;Song, Byunghun;Shin, Junho
    • Journal of the Korean Institute of Gas
    • /
    • v.26 no.4
    • /
    • pp.27-35
    • /
    • 2022
  • When LPG leaks from a storage tank, the gas tends to sink to the ground because it is heavier than air. In still air, the gas readily forms vapor clouds, which can cause severe accidents. It is therefore important to prepare preventive measures in advance by analyzing the damage range caused by LPG leakage and vapor clouds. This study analyzed the damage range of LPG leakage and vapor clouds while considering the cold air flow generated at night by the topographical characteristics and land-use status of Hagari, Jeju. Simulation of the cold air flow using KLAM_21 showed that cold air of about 2 m/s flowed in from the southeast due to the influence of the terrain. The damage range of the LPG leakage and vapor cloud was then analyzed using ALOHA. With a leak-hole size of 10 cm and a wind speed of 2 m/s, the range corresponding to 60 % of the LEL (12,600 ppm) was 61 m, which is expected to affect nearby residential areas. These results can be used as basic data for preparing preventive measures against accidents caused by vapor clouds. In future work, CFD modeling such as FLACS should be applied to examine vapor cloud formation from LPG leakage in relatively narrow areas and to support cause analysis.
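The 12,600 ppm threshold above is consistent with 60 % of the lower explosive limit (LEL) of propane, the main LPG component, taken as about 2.1 vol%. A minimal sketch of that unit conversion (the LEL value is an assumption from standard reference data, not from the paper):

```python
def lel_fraction_ppm(lel_vol_percent: float, fraction: float) -> float:
    """Convert a fraction of the lower explosive limit (LEL) to ppm.

    1 vol% corresponds to 10,000 ppm, so fraction * LEL in vol%
    scales directly to a ppm concentration threshold.
    """
    return lel_vol_percent * 10_000 * fraction

# Propane LEL ~ 2.1 vol% (assumed reference value); 60 % LEL threshold:
threshold = lel_fraction_ppm(2.1, 0.60)  # ≈ 12,600 ppm
print(threshold)
```

This is the same style of level-of-concern threshold that ALOHA reports for flammable releases.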

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of the GPU. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, the Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on the length of the code; the learning curve and ease of coding are not the main concern.
By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, any new deep learning model or any new search method one can think of can be implemented and tested. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important; and for someone learning deep learning models, the availability of sufficient examples and references matters as well.
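The computational-graph mechanism described above — local partial derivatives on each edge, combined by the chain rule — can be sketched in a few lines. This is a toy illustration of reverse-mode automatic differentiation, not the actual API of Theano, Tensorflow, or CNTK:

```python
class Node:
    """A scalar node in a computational graph.

    Each node records its parents together with the local partial
    derivative along that edge; backward() accumulates gradients by
    the chain rule (reverse-mode automatic differentiation).
    """
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_gradient)
        self.grad = 0.0

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

def mul(a, b):
    # d(ab)/da = b, d(ab)/db = a
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def add(a, b):
    # d(a+b)/da = d(a+b)/db = 1
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
x, y = Node(3.0), Node(4.0)
f = add(mul(x, y), x)
f.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Real frameworks build the same kind of graph over tensors and fuse the backward pass for GPU execution, but the chain-rule bookkeeping is the same idea.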

Sports Biomechanical Analysis of Physical Movements on the Basis of the Patterns of the Ready Poses (준비동작의 형태 변화에 따른 신체 움직임의 운동역학적 분석)

  • Lee, Joong-Sook
    • Korean Journal of Applied Biomechanics
    • /
    • v.12 no.2
    • /
    • pp.179-195
    • /
    • 2002
  • The purpose of this research is to provide a proper model by analyzing the sports biomechanics of physical movements on the basis of two patterns (open-stance and cross-stance) in the ready-to-start pose. The subjects for this study were five male handball players from P university and five female shooting players from S university. Three-way moving actions at the start (right, left, and forward) were recorded with two high-speed video cameras and measured with two force platforms and an EMG system. A three-dimensional motion analyzer, a GRF system, and a whole-body reaction movement system were used to characterize the movement mechanisms of the start pose. The analytic results were as follows. 1. Examination of the three-way moving actions at the start showed that the cross-stance pose is better than the open-stance pose for the speed of body-weight shift. A knee joint angle of 175 degrees at take-off and a hip joint angle of 172 degrees were best for the start pose. 2. The support time and GRF data show that the quickest center-of-gravity shift occurred when cross-stanced male subjects started to move toward their left-hand side. The quickest male average support times of the left and right foot were 0.19±0.07 s and 0.26±0.06 s, respectively, a difference of 0.07 s between the two feet. 3. Analysis of the GRF of the moving actions at the start pose showed that more than 1550 N was loaded on one foot at the open-stance start, an overload that may cause physical injury. At the cross-stance pose, however, the GRF was properly dispersed over both feet, with a maximum of 1350 N loaded on one foot.
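Support times like those in result 2 are commonly derived from the force-platform trace as the time the vertical GRF stays above a contact threshold. A minimal sketch of that computation (the sampling rate, threshold, and trace below are illustrative assumptions, not the study's data):

```python
def support_time(grf_newtons, dt, threshold=20.0):
    """Duration (s) for which the vertical ground-reaction force stays
    above a contact threshold. grf_newtons is a sampled force trace,
    dt the sampling interval in seconds; the 20 N threshold is a
    typical but assumed contact criterion."""
    return sum(dt for f in grf_newtons if f > threshold)

# 1000 Hz sampling; foot in contact for 190 samples -> ~0.19 s,
# matching the order of magnitude reported for the quickest foot.
trace = [0.0] * 50 + [800.0] * 190 + [0.0] * 50
print(round(support_time(trace, 0.001), 3))  # 0.19
```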

Feasibility of Ocean Survey by using Ocean Acoustic Tomography in southwestern part of the East Sea (동해 남서해역에서 해양음향 토모그래피 운용에 의한 해양탐사 가능성)

  • Han, Sang-Kyu;Na, Jung-Yul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.6
    • /
    • pp.75-82
    • /
    • 1994
  • The ray paths and travel times of sound waves in the ocean depend on the physical properties of the propagating medium. Ocean Acoustic Tomography (OAT) inversely estimates the physical properties of the medium from travel-time variations between fixed sources and receivers. To apply OAT as an ocean survey technology, the tomographic procedure requires solving the forward problem, in which variations of the travel times are identified with the variability of the medium. In addition, the received signals must satisfy the necessary conditions of ray-path stability, identification, and resolution for OAT to work. A canonical ocean was determined based on historical data, and its travel times and ray paths were used as reference values. The sound speed of the canonical ocean in the East Sea is about 1523 m/s at the surface and 1458 m/s at the sound channel axis (400 m). Sound speeds in the East Sea are perturbed by warm eddies whose horizontal extent is more than 100 km and whose depth scale exceeds 200 m. In this study, an acoustic source and receiver were placed at a depth of 350 m, above the sound channel axis, separated by a range of 200 km. Ray paths were identified by the ray-theory method in a range-dependent medium whose sound speed is a function of range and depth. The eigenray information, obtained by interpolating between the rays bracketing the receiver, was used to simulate the received signal by convolving the source signal with the eigenray arrivals. The source signal was taken as a 400 Hz rectangular pulse with a bandwidth of 16 Hz and a pulse length of 64 ms. Analysis of the received signal and the identified ray paths using a numerical model of underwater sound propagation shows that the simulated signals satisfy the necessary conditions for OAT as applied in the East Sea.
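The received-signal simulation described above — convolving the source pulse with the eigenray arrivals — amounts to summing delayed, scaled copies of the pulse. A minimal sketch using the paper's 64 ms rectangular pulse envelope (the sampling rate and the eigenray delays and amplitudes are illustrative assumptions, not values from the paper):

```python
import numpy as np

fs = 4000.0                     # sampling rate in Hz (illustrative)
pulse_len = 0.064               # 64 ms rectangular pulse, as in the paper
t = np.arange(0, 0.3, 1 / fs)
source = ((t >= 0) & (t < pulse_len)).astype(float)  # envelope only

# Hypothetical eigenray arrivals: (travel-time offset in s, amplitude)
eigenrays = [(0.000, 1.0), (0.045, 0.6), (0.110, 0.3)]

# Received signal = sum of delayed, scaled copies of the source pulse
received = np.zeros_like(t)
for delay, amp in eigenrays:
    shift = int(round(delay * fs))
    received[shift:] += amp * source[: len(t) - shift]

print(received[0], received[int(0.05 * fs)])
```

Identifying which peak in `received` belongs to which eigenray is exactly the "identification" condition the abstract requires.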


UNDERWATER DISTRIBUTION OF VESSEL NOISE (선박소음의 수중분포에 관한 연구)

  • PARK Jung Hee
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.10 no.4
    • /
    • pp.227-235
    • /
    • 1977
  • The noise pressure scattered underwater by the engine revolution of a pole-and-line vessel, Kwan-Ak-San (G.T. 234.96), was measured at Lat. 34°47'N, Long. 128°53'E on 16 August 1976 and at Lat. 34°27'N, Long. 128°23'E on 28 July 1977. The noise pressure passing through each observation point (Nos. 1 to 5), established at 10 m intervals around the circumference of the outside hull, was recorded while the vessel was cruising and while it was drifting. While drifting, the engine revolution was fixed at 600 r.p.m. and the noise was recorded at every 10 m distance from observation point No. 3, in both the horizontal and vertical directions, at 90° to the stern-bow line. While cruising, the engine was kept at full speed at 700 r.p.m., and the sound passing through the water at 1 m depth was recorded while the vessel moved back and forth. The noise pressure was analyzed with a sound level meter (Bruel & Kjaer 2205, measuring range 37-140 dB) in the anechoic chamber of the Institute of Marine Science, National Fisheries University of Busan. The frequency and waveform of the noise were analyzed in the Laboratory of Navigation Instruments. The results show that the noise pressure was closely related to the engine revolution: the noise pressure was 100 dB at 400 r.p.m., each increase of 100 r.p.m. raised the noise pressure by about 1 dB, and the maximum appeared at 600 r.p.m. (Fig. 5). With the engine revolution fixed at 700 r.p.m., the noise pressures at observation points Nos. 1 to 5 around the circumference of the outside hull were 75, 78, 76, 74, and 68 dB (highest at No. 2) while under way, and 75, 76, 77, 70, and 67 dB (highest at No. 3) while drifting (Fig. 5).
When the vessel plied a 1,400 m course at 700 r.p.m., the noise pressures were 67 dB at 0 m, 64 dB at 600 m, and 56 dB at 1,400 m moving forward, and 72 dB at 0 m, 66 dB at 600 m, and 57 dB at 1,400 m moving backward, indicating Doppler effects of 5 dB at 0 m and 3 dB at 200 m (Fig. 6). The noise pressures at depths of 1, 10, 20, 30, 40, and 50 m below observation point No. 7 (horizontal distance 20 m from point No. 3) were 68, 75, 62, 59, 55, and 51 dB, respectively, while the vessel drifted with the engine revolution maintained at 600 r.p.m. (Fig. 8-B), whereas the noise pressures at observation points Nos. 6, 7, 8, 9, and 10 at 10 m depth were 64, 75, 55, 58, 58, and 52 dB, respectively (Fig. 8-A).
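The reported relation between engine revolution and noise pressure — about 100 dB at 400 r.p.m., rising roughly 1 dB per additional 100 r.p.m. — can be expressed as a simple linear model. This is only a sketch of the trend attributed to Fig. 5, not a physical law:

```python
def noise_pressure_db(rpm, base_rpm=400, base_db=100.0, db_per_100rpm=1.0):
    """Linear fit to the reported trend: ~100 dB at 400 r.p.m.,
    rising about 1 dB per additional 100 r.p.m. (rough model of the
    relation reported in the abstract, valid only near that range)."""
    return base_db + (rpm - base_rpm) / 100.0 * db_per_100rpm

print(noise_pressure_db(600))  # 102.0
```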


The Validity and Reliability of 'Computerized Neurocognitive Function Test' in the Elementary School Child (학령기 정상아동에서 '전산화 신경인지기능검사'의 타당도 및 신뢰도 분석)

  • Lee, Jong-Bum;Kim, Jin-Sung;Seo, Wan-Seok;Shin, Hyoun-Jin;Bai, Dai-Seg;Lee, Hye-Lin
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.11 no.2
    • /
    • pp.97-117
    • /
    • 2003
  • Objective: This study examined the validity and reliability of the Computerized Neurocognitive Function Test among normal children in elementary school. Methods: The K-ABC, K-PIC, and Computerized Neurocognitive Function Test were administered to 120 normal children (boys and girls in equal numbers) from June 2002 to January 2003. The children had above-average intelligence and passed the exclusion criteria. To verify test-retest reliability, the Computerized Neurocognitive Function Test was administered again 4 weeks later to 30 randomly selected children. Results: In the correlation analysis for validity, four of the continuous performance tests matched those for adults. In the memory tests, the results replicated previous research, with a difference between the forward and backward tests of short-term memory. In the higher cognitive function tests, each test was composed of items with a distinct purpose. Factor analysis of 43 variables from the 12 tests yielded 10 factors, with a total explained variance of 75.5%. These factors were, in order: sustained attention, information processing speed, vigilance, verbal learning, allocation of attention and concept formation, flexibility, concept formation, visual learning, short-term memory, and selective attention. In the correlation with the K-ABC, conducted to prepare explanatory criteria, selectively significant correlations (p<.05-.001) were found with the K-ABC subscales. The test-retest reliability results reflected practice effects, which were especially prominent in the higher cognitive function tests. However, the split-half reliability (r=0.548-0.7726, p<.05) and internal consistency (0.628-0.878, p<.05) of each group were significantly high. Conclusion: The performance of normal children on the Computerized Neurocognitive Function Test showed different developmental characteristics from that of adults.
Basal information for preparing the explanatory criteria could be acquired by examining the relation with a standardized intelligence test that has a neuropsychological background.
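Internal-consistency coefficients of the kind reported above are typically Cronbach's alpha. A minimal sketch of the standard formula, without external libraries (the item scores in the example are made up for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores). `items` is a list of item-score lists, one list per
    item, all over the same respondents (sample variance, n-1)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Two perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```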


Right of disposition of cargo and Air waybill (송하인의 운송물 처분청구권과 항공화물운송장)

  • Nam, Hyun-Sook;Choi, June-Sun
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.177-199
    • /
    • 2015
  • Commerce enriches human life, and within commerce, the transportation of cargo is arguably the most important element of business transactions. Traditionally, marine transport has been the major mode of commercial carriage, but carriage of cargo by air is on the increase. While the fare for air freight is higher than that for ocean freight, air freight offers many benefits that justify the higher shipping fee: lower insurance premiums, packing charges, inventory control, cost management, and especially speed. Air freight transport is therefore gradually growing. An air waybill (AWB) is required in the air transport flow. It is a non-negotiable security, so the holder cannot transfer a right to a third party. Some scholars suggest that a negotiable AWB is needed, but this seems nearly impossible to achieve; use of the e-AWB shows gains in numbers, even if it has not met expectations. Going forward, it would appear reasonable to conduct follow-up studies on the utility and legal problems of the e-AWB. After sending goods, the consignor retains the right of disposition of the cargo in some cases, and more research is necessary because this right is related to change of ownership and trade settlement. According to WATS (World Air Transport Statistics), Korean Air took third place in international freight in 2014, and fifth place in total (domestic and international) traffic, to great acclaim. However, there is a lack of research supporting this business showing. It is hoped that further studies on the e-AWB, stoppage in transit, the risk of outstanding amounts, etc. will help develop the Korean air freight industry.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for many applications, such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into a fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is essentially a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time in Fourier transforming the encoded parameters on the hologram into the value to be solved. Depending on the speed of the computer, this process can last up to ten minutes.
It is more effective if, instead of merely generating random holograms in the initialization, a set of approximately desired holograms is employed. The initial population then requires fewer trial holograms, which corresponds to a reduction in the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms and is supplemented with approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to obtain approximately desired holograms, which are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initialization. Hence, the parameters verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% were measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
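The key idea of the hybrid initialization — seeding the GA population with network-predicted holograms instead of purely random ones — can be sketched as follows. This illustrates only the seeding step; the function name and the precomputed `nn_seeds` are assumptions, not the authors' code:

```python
import random

def init_population(pop_size, hologram_len, nn_seeds):
    """Hybrid GA initialization: part of the population comes from
    approximately desired binary-phase holograms produced by a trained
    network (nn_seeds, assumed precomputed), and the remainder is
    filled with random binary-phase holograms as in the classical GA."""
    population = [list(s) for s in nn_seeds[:pop_size]]
    while len(population) < pop_size:
        population.append([random.randint(0, 1) for _ in range(hologram_len)])
    return population

# Two hypothetical network-predicted 4-pixel holograms seed a population of 5
seeds = [[0, 1, 0, 1], [1, 1, 0, 0]]
pop = init_population(5, 4, seeds)
print(len(pop), pop[0])
```

Because the seeded individuals already sit near good solutions, the GA needs fewer generations to converge, which is where the reported 66.7% time reduction comes from.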


A study on the change effect of emission regulation mode on vehicle emission gas (배기가스 규제 모드 변화가 차량 배기가스에 미치는 영향 연구)

  • Lee, Min-Ho;Kim, Ki-Ho;Lee, Joung-Min
    • Journal of the Korean Applied Science and Technology
    • /
    • v.35 no.4
    • /
    • pp.1108-1119
    • /
    • 2018
  • As interest in air pollution gradually rises at home and abroad, automotive and fuel researchers have studied exhaust and greenhouse gas emission reduction from vehicles through many approaches, including new engine designs, innovative after-treatment systems, clean (eco-friendly alternative) fuels, and fuel quality improvement. This research addresses two main issues: exhaust emissions (regulated and non-regulated emissions, and particulate matter, PM) and greenhouse gases from vehicles. Exhaust emissions and greenhouse gases from automobiles cause problems such as ambient pollution and health effects. To reduce these emissions, many countries are introducing regulations based on new exhaust gas test modes. The Worldwide harmonized Light-duty vehicle Test Procedure (WLTP) for emission certification has been developed in the WP.29 forum of UNECE since 2007. This test procedure was applied to domestic light-duty diesel vehicles at the same time as in Europe. The air pollutant emissions of light-duty vehicles are regulated by mass per distance, so the driving cycle can affect the results. Vehicle exhaust emissions vary substantially with climate conditions and driving habits. Extreme outside temperatures tend to increase emissions, because more fuel must be used to heat or cool the cabin. High driving speeds also increase emissions because of the energy required to overcome increased drag. Compared with gradual acceleration, rapid acceleration increases emissions, as do additional devices (air conditioner and heater) and road inclines. In this study, three light-duty vehicles were tested with WLTP, NEDC, and FTP-75, which are used to regulate the emissions of light-duty vehicles, to determine how much emissions are affected by the different driving cycles. The emission results did not show a statistically meaningful difference.
The maximum emissions were found in the low-speed phase of WLTP, mainly caused by cold engine conditions. The amount of emissions under cold engine conditions differed among the test vehicles, which means that different technical solutions are required in this respect to cope with the WLTP driving cycle.
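The mass-per-distance regulation mentioned above reduces, for a multi-phase cycle such as WLTP, to total pollutant mass divided by total distance driven. A minimal sketch (the phase masses and distances below are hypothetical, chosen only to show the arithmetic):

```python
def cycle_emission_g_per_km(phase_grams, phase_km):
    """Distance-specific emission for a multi-phase driving cycle
    (e.g. WLTP low/medium/high/extra-high phases): total mass of a
    pollutant emitted over the cycle divided by total distance driven."""
    return sum(phase_grams) / sum(phase_km)

# Hypothetical per-phase pollutant masses (g) and distances (km)
# for a four-phase cycle; the low-speed phase dominates, as with a
# cold engine.
result = cycle_emission_g_per_km([1.2, 0.4, 0.3, 0.5], [3.1, 4.8, 4.5, 8.3])
print(result)
```

Because the low-speed (cold-start) phase contributes heavily to the numerator but little to the denominator, cold-engine behavior shifts the regulated g/km figure noticeably, which is the effect the study observed under WLTP.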