• Title/Summary/Keyword: Dynamic

Search Results: 39,366

A Study of the Capsuloligamentous Anatomy of the Glenohumeral Joint Using Magnetic Resonance Imaging and Three-Dimensional Imaging. Dynamic In Vivo Study (자기공명 영상 및 3차원 영상을 이용한 견관절 관절낭-인대의 해부학적 연구. 역동학적 생체연구)

  • Park Tae-Soo;Choi Il-Yong;Joo Kyung-Bin;Kim Sun-Il;Kim Jun-Sic;Paik Doo-Jin
    • Journal of the Korean Arthroscopy Society / v.4 no.2 / pp.154-158 / 2000
  • Purpose: The purpose of this study is to demonstrate changes in the orientation of the glenohumeral ligaments (GHL) at different degrees of abduction and rotation in normal healthy individuals. Materials and Methods: Saline magnetic resonance (MR) arthrography was performed on nine consecutive shoulders of normal healthy adults. MR images were obtained in three positions of abduction and external rotation ($0^{\circ}$ and $0^{\circ}$, $45^{\circ}$ and $25^{\circ}$, and $90^{\circ}$ and maximum, respectively). From each series of consecutive MR images, three-dimensional images were reconstructed after locating the middle glenohumeral ligament (MGHL) and the inferior glenohumeral ligament (IGHL) on a workstation computer. Results: Across the three shoulder positions in sequence, the MGHL appeared double-curved, then straight, and finally curved again. The IGHL, on the other hand, was obliquely positioned, then curvilinear, and finally straight and extended at the lower part of the anterior surface of the humeral head. Conclusions: At $45^{\circ}$ of abduction and $25^{\circ}$ of external rotation, and at $90^{\circ}$ of abduction and maximal external rotation of the shoulder, the MGHL and the IGHL, respectively, served as the most important static stabilizers of the glenohumeral joint.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have shown impressive applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors," is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected only to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; pooling layers simplify the information in the output from the convolutional layer. Recent convolutional architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic improvements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem: vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, making learning in early layers extremely slow. The problem is actually worse in RNNs, since gradients are propagated backward not just through layers but through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
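
The abstract's three convolutional ideas (local receptive fields, shared weights, pooling) and the backpropagation-plus-gradient-descent training loop can be made concrete with a short sketch. This is a minimal illustration assuming PyTorch; the layer sizes, the 28x28 single-channel input, and the dummy batch are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # nn.Conv2d realizes both "local receptive fields" (5x5 windows)
        # and "shared weights": one 5x5 kernel per feature map is slid over
        # the whole image, so every hidden unit detects the same feature
        # at a different location.
        self.conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5)
        # Pooling immediately after convolution simplifies (downsamples)
        # the convolutional output, as the abstract describes.
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.fc = nn.Linear(8 * 12 * 12, num_classes)  # 28 -> 24 -> 12

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        return self.fc(x.flatten(start_dim=1))

# Backpropagation computes the gradient of the error with respect to every
# weight; an optimizer such as gradient descent then updates the weights.
model = TinyConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 1, 28, 28)   # dummy batch, illustrative only
labels = torch.randint(0, 10, (4,))
loss = loss_fn(model(images), labels)
loss.backward()                      # backward propagation of errors
optimizer.step()                     # gradient-descent weight update
```

Note how a single 5x5 kernel per feature map implements weight sharing: the same feature detector is applied at every image location, which is part of what keeps convolutional networks fast to train.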

A study of the cause of metal failure in treatment of femur shaft fracture - Fractographical and clinical analysis of metal failure- (대퇴골 간부 골절시 사용한 금속물의 금속부전(Metal failure)의 기전에 대한 연구)

  • Jeon, Chun-Bae;Seo, Jae-Sung;Ahn, Jong-Chul;Ahn, Myun-Whan;Ihn, Joo-Chyl
    • Journal of Yeungnam Medical Science / v.7 no.1 / pp.81-93 / 1990
  • The author fractographically analyzed the cause of metal failure (the first time this procedure has been used for metal failure) and also analyzed it clinically. Eight cases amenable to fractographic analysis were selected. In all cases, the analysis was done after treatment of metal failure of implants internally fixed to femoral shaft fractures at the Department of Orthopedic Surgery, Yeungnam University Hospital, during the six-year period from May 1983 to September 1989. 1. Metal failure occurred in five dynamic-compression plates, one Jewett nail, one screw in a Rowe plate, and one interlocking nail. 2. The clinical cause of metal failure was deficiency of the medial buttress in five cases, incorrect position of the implant in one case, and incorrect selection of the implant in two cases. 3. The interval between internal fixation and metal failure was four months in one case, five to twelve months in six cases, and three years in one case. 4. The causes of metal failure identified fractographically were: first, impact failure (one case); second, fatigue failure (six cases), with machining marks (stress risers) in four cases, the fatigue failures being of consistent low-cycle and irregular cyclic types; and third, stress corrosion cracking (one case). 5. 316L stainless steel has good resistance to corrosion; however, when its protective surface film is destroyed by fretting, it shows pitting corrosion, which is perhaps the main cause of metal failure. 6. Mechanical damage to the implant, occurring during manufacture or while making a screw hole, may also be a main cause of metal failure.

The Effect of Rain on Traffic Flows in Urban Freeway Basic Segments (기상조건에 따른 도시고속도로 교통류변화 분석)

  • 최정순;손봉수;최재성
    • Journal of Korean Society of Transportation / v.17 no.1 / pp.29-39 / 1999
  • An earlier study of the effect of rain found that the capacity of freeway systems was reduced, but it did not address the effects of rain on the nature of traffic flows. Indeed, traffic flow varies substantially with the intensity of adverse weather, so these effects must be considered in freeway facility design; however, all of the data in the Highway Capacity Manual (HCM) come from ideal conditions. The primary objective of this study is to investigate the effect of rain on urban freeway traffic flows in Seoul. To do so, we investigated the relations between three key traffic variables (flow rate, speed, and occupancy), their threshold values between congested and uncongested traffic flow regimes, and the speed distribution. Traffic data from the Olympic Expressway in Seoul were obtained from a video image detection system (Autoscope) at 30-second and 1-minute aggregation intervals. The slope of the regression line relating flow to occupancy in the uncongested regime decreases when it is raining. In essence, this result indicates that the average service flow rate (which may be interpreted as the capacity of the freeway) is reduced as weather conditions deteriorate. The reduction is in the range of 10 to 20%, which agrees with the range proposed by the 1994 US HCM. It is noteworthy that the service flow rates of the inner lanes are relatively higher than those of the other lanes. The average speed is also reduced on rainy days, but the flow-speed relationship and the threshold values of speed and occupancy (called the critical speed and critical occupancy) are not very sensitive to weather conditions.
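
The study's central quantity is the slope of the flow-occupancy regression line in the uncongested regime, which falls in rain. Below is a minimal sketch, assuming NumPy and synthetic data (not the Olympic Expressway measurements), of how such slopes might be estimated and compared between dry and rainy conditions; the 25% critical occupancy and the underlying slopes are invented for illustration.

```python
import numpy as np

def uncongested_slope(occupancy, flow, critical_occupancy):
    """Least-squares slope of flow vs. occupancy below the congestion threshold."""
    mask = occupancy < critical_occupancy
    slope, _intercept = np.polyfit(occupancy[mask], flow[mask], deg=1)
    return slope

rng = np.random.default_rng(0)
occ = rng.uniform(2, 25, 200)                  # percent occupancy (synthetic)
dry_flow = 90 * occ + rng.normal(0, 50, 200)   # veh/h per lane, illustrative
wet_flow = 75 * occ + rng.normal(0, 50, 200)   # ~15% lower service flow in rain

s_dry = uncongested_slope(occ, dry_flow, critical_occupancy=25)
s_wet = uncongested_slope(occ, wet_flow, critical_occupancy=25)
print(f"slope reduction in rain: {100 * (1 - s_wet / s_dry):.1f}%")
```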

Comparison of Blinking Patterns When Watching Ultra-high Definition Television: Normal versus Dry Eyes (초고선명 텔레비전 시청 시 정상안과 건성안에서의 눈깜박임 양상 비교)

  • Kang, Byeong Soo;Seo, Min Won;Yang, Hee Kyung;Seo, Jong Mo;Lee, Sanghoon;Hwang, Jeong-Min
    • Journal of The Korean Ophthalmological Society / v.58 no.6 / pp.706-711 / 2017
  • Purpose: To analyze blinking patterns while watching an ultra-high definition (UHD) television and to compare the results between normal eyes and dry eyes. Methods: A total of 59 participants aged 13 to 69 years were instructed to watch a colorful, dynamic video on a UHD television for 10 minutes. Before and after the viewing, we measured best corrected visual acuity, autorefraction, tear break-up time, and the degree of corneal erosion and conjunctival hyperemia via slit lamp biomicroscopy. In addition, questionnaires evaluating eye fatigue and dry eye symptoms were completed. Dry eye syndrome was defined as a tear break-up time of less than 5 seconds in either eye, conjunctival injection, or marked corneal erosion. The number of blinks and the duration of blinking were measured and analyzed at the early and late phases of video watching. Results: After watching the UHD television, in the normal eye group the tear break-up time was significantly decreased (p < 0.001) and the degree of corneal erosion was significantly increased (p = 0.023); however, the participants' subjective symptoms were not aggravated (p = 0.080). There were no significant differences in blinking patterns in the dry eye group, whereas in the normal eye group the mean blinking time was significantly increased (p = 0.030). Conclusions: Watching a UHD television changes the tear break-up time, degree of corneal erosion, and blinking pattern in normal eyes, which may increase the risk of dry eye syndrome.

Current Development of Company Law in the European Union (유럽주식회사법의 최근 동향에 관한 연구)

  • Choi, Yo-Sop
    • Journal of Legislation Research / no.41 / pp.229-260 / 2011
  • European Union (EU) law has been a complex but at the same time fascinating subject of study due to its dynamic evolution. In particular, the Lisbon Treaty, which entered into force in December 2009, represents the culmination of a decade of attempts at Treaty reform and harmonisation in diverse sectors. Among the fields of EU private law, company law harmonisation has been one of the most hotly debated issues with regard to the freedom of establishment in the internal market. Due to the significant differences between national provisions on company law, harmonisation seemed somewhat difficult. However, Council Regulation 2157/2001 was adopted in 2001 and now provides the basis for the Statute for a European Company (Societas Europaea: SE); the Statute is supplemented by Council Directive 2001/86 on the involvement of employees. The SE Statute is a legal measure intended to contribute to the internal market, and it provides a choice for companies that wish to merge, create a joint subsidiary, or convert a subsidiary into an SE. Through this option, the SE became a corporate form available only to existing companies incorporated in different Member States of the EU. The important question about the meaning of the SE Statute is whether the distinctive characteristics of the SE make it an attractive enough option to ensure significant numbers of SE registrations. The outcome achieved through the SE Statute is in fact an example of regulatory competition. Traditionally, regulatory competition in the freedom of establishment has been competition between the national statutes of Member States. This time, however, it is not a competition between Member States: the Union itself has joined the competition between legal orders and now competes with the company law systems of the Member States. Quite a number of scholars expect that the number of SEs will increase significantly. Of course, there is no evidence of regulatory competition that Korea currently faces. However, because of the increasing volume of international trade and the expansion of regional economic blocs, it is worth considering this example of the development of EU company law. In addition to the existing SE Statute, the EU Commission has also proposed a new corporate form, the Societas Privata Europaea (a private limited-liability company). All of these developments in European company law will help firms make the best choice for company establishment. The Delaware-style development in the EU will foster the race to the bottom, thereby improving the contents of company law. To conclude, the study of the development of European company law is important for understanding the evolution of company law and harmonisation efforts in the EU. Key words: European Union, EU company law, Societas Europaea, SE Statute, one-tier system, two-tier system, race to the bottom.

Changes in Biochemical Components of Several Tissues in Solen grandis, in Relation to Gonad Developmental Phases (대맛조개, Solen grandis의 생식소 발달 단계에 따른 일부 조직의 생화학적 성분변화)

  • Chung, Ee-Yung;Kim, Hyun-Jin;Kim, Jong-Bae;Lee, Chang-Hoon
    • The Korean Journal of Malacology / v.22 no.1 s.35 / pp.27-38 / 2006
  • We investigated the reproductive cycle and gonad developmental phases of Solen grandis by histological observation. Seasonal changes in the biochemical components of the adductor muscle, visceral mass, foot muscle, and mantle were studied by biochemical analysis from January to December 2005. The reproductive cycle of this species can be classified into five successive stages: early active stage (December to January), late active stage (January to March), ripe stage (March to July), partially spawned stage (June to July), and spent/inactive stage (July to December). Total protein content was highest in the foot muscle; there it was high in January (early active stage), lowest in April (ripe stage), and highest in August (partially spawned stage). In the visceral mass, total protein content began to increase in February (late active stage) and reached a maximum in March (ripe stage); thereafter, it gradually decreased between June and July (partially spawned stage). There was a strong negative correlation in total protein content between the visceral mass and the mantle (r = -0.594, p = 0.042). Although there was a positive correlation between the adductor muscle and the foot muscle, it was not statistically significant (r = 0.507, p = 0.093). Total lipid content was highest in the visceral mass, some 2- to 5-fold higher than in the adductor muscle, foot muscle, and mantle. Monthly changes in total lipid content were also most dynamic in the visceral mass: it was relatively high between January and February, peaked in March (ripe stage), decreased rapidly from April to July (ripe and partially spawned stages), and gradually decreased from September to December (spent/inactive stage). There was a strong positive correlation in total lipid content between the foot muscle and the adductor muscle (r = 0.639, p = 0.025). Although a negative correlation was found between the visceral mass and the mantle (r = -0.392), it was not statistically significant (p = 0.208). Glycogen contents changed within a relatively narrow range and were similar among tissues; there was no statistically significant correlation in glycogen content among tissues.
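
The correlations reported above (e.g., r = -0.594, p = 0.042) are standard Pearson tests over twelve monthly observations. A minimal sketch, assuming SciPy and invented monthly values rather than the paper's measurements:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
visceral_protein = rng.normal(10, 2, 12)  # 12 monthly samples (invented)
mantle_protein = 20 - 0.8 * visceral_protein + rng.normal(0, 1, 12)

# Pearson's r and its two-sided p-value, as quoted in the abstract.
r, p = pearsonr(visceral_protein, mantle_protein)
print(f"r = {r:.3f}, p = {p:.3f}")  # significant at the 0.05 level if p < 0.05
```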

Key Methodologies to Effective Site-specific Assessment in Contaminated Soils: A Review (오염토양의 효과적 현장조사에 대한 주요 방법론의 검토)

  • Chung, Doug-Young
    • Korean Journal of Soil Science and Fertilizer / v.32 no.4 / pp.383-397 / 1999
  • For sites to be investigated, the results of such an investigation can be used to determine goals for cleanup, quantify risks, distinguish acceptable from unacceptable risk, and develop cleanup plans that do not cause unnecessary delays in the redevelopment and reuse of the property. To do this, it is essential that an appropriately detailed study of the site be performed to identify the cause, nature, and extent of contamination and the possible threats to the environment or to any people living or working nearby, through the analysis of samples of soil, soil gas, groundwater, surface water, and sediment. The migration pathways of contaminants are also examined during this phase. A key element of cost-effective site assessment, which helps standardize and accelerate the evaluation of contaminated soils, is a simple step-by-step methodology for environmental science and engineering professionals to calculate risk-based, site-specific soil levels for contaminants. Its use may significantly reduce the time it takes to complete soil investigations and cleanup actions at some sites, as well as improve the consistency of these actions across the nation. Effective site assessment requires that the criteria for choosing the type of standard and setting its magnitude come from different sources, depending on many factors including the nature of the contamination. A general scheme for site-specific assessment consists of sequential Phases I, II, and III, defined by a workplan and soil screening levels. Phase I is conducted to identify and confirm a site's recognized environmental conditions resulting from past actions. If Phase I identifies potential hazardous substances, Phase II is usually conducted to confirm the absence, or the presence and extent, of contamination; Phase II involves the collection and analysis of samples. Phase III remediates the contaminated soils delineated in Phases I and II. Important factors in determining whether an assessment standard is site-specific and suitable are (1) the spatial extent of the sampling and the size of the sample area; (2) the number of samples taken; (3) the sampling strategy; and (4) the way the data are analyzed. Although selected methods are recommended, the application of quantitative methods should be directed by users with prior training or experience in the dynamic site investigation process.

Exploring Opinions on University Online Classes During the COVID-19 Pandemic Through Twitter Opinion Mining (트위터 오피니언 마이닝을 통한 코로나19 기간 대학 비대면 수업에 대한 의견 고찰)

  • Kim, Donghun;Jiang, Ting;Zhu, Yongjun
    • Journal of the Korean Society for Library and Information Science / v.55 no.4 / pp.5-22 / 2021
  • This study aimed to understand how people perceived the transition from offline to online classes at universities during the COVID-19 pandemic. To achieve this goal, we collected tweets related to online classes on Twitter and performed sentiment analysis and time-series topic analysis. We have the following findings. First, through the sentiment analysis, we found more negative than positive opinions overall, but negative opinions gradually decreased over time. Exploring the monthly distribution of sentiment scores, we found that scores during the semesters were more widespread than those during the vacations, so more diverse emotions and opinions were expressed during the semesters. Second, through time-series topic analysis, we identified five main topics in positive tweets: class environment and equipment, positive emotions, places for taking online classes, language classes, and tests and assignments. The four main topics in negative tweets were time (class and break time), tests and assignments, negative emotions, and class environment and equipment. In addition, we examined trends in public opinion on online classes by tracking changes in topic composition over time through the proportions of representative keywords in each topic. Unlike existing studies of public opinion on online classes, this study attempted to understand overall opinion from tweet data using sentiment and time-series topic analysis. The results can be used to improve the quality of online classes in universities and to help universities and instructors design and offer better online classes.
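
The tweet-scoring step of such a pipeline can be sketched as follows. This is a minimal illustration assuming NLTK's VADER lexicon and invented English tweets; the abstract does not specify the authors' actual sentiment tool, tweet language, or topic model.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

tweets = [  # invented examples, not data from the study
    "Online classes save me two hours of commuting, love it",
    "The lecture audio kept cutting out again, so frustrating",
]
for text in tweets:
    # VADER's compound score lies in [-1, 1]; scores like these can be
    # aggregated by month to trace how opinion shifts over a semester.
    score = sia.polarity_scores(text)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{score:+.3f} {label}: {text}")
```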

Comparative Assessment of Linear Regression and Machine Learning for Analyzing the Spatial Distribution of Ground-level NO2 Concentrations: A Case Study for Seoul, Korea (서울 지역 지상 NO2 농도 공간 분포 분석을 위한 회귀 모델 및 기계학습 기법 비교)

  • Kang, Eunjin;Yoo, Cheolhee;Shin, Yeji;Cho, Dongjin;Im, Jungho
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1739-1756 / 2021
  • Atmospheric nitrogen dioxide (NO2) is mainly caused by anthropogenic emissions. It contributes to the formation of secondary pollutants and ozone through chemical reactions, and it adversely affects human health. Although ground stations that monitor NO2 concentrations in real time are operated in Korea, they are of limited use for analyzing the spatial distribution of NO2 concentrations, especially over areas with no stations. Therefore, this study conducted a comparative experiment on spatial interpolation of NO2 concentrations based on two linear regression methods, multiple linear regression (MLR) and regression kriging (RK), and two machine learning approaches, random forest (RF) and support vector regression (SVR), for the year 2020. The four approaches were compared using leave-one-out cross-validation (LOOCV). The daily LOOCV results showed that MLR, RK, and SVR produced an average daily index of agreement (IOA) of 0.57, higher than that of RF (0.50). The average daily normalized root mean square error of RK was 0.9483%, slightly lower than those of the other models. MLR, RK, and SVR showed similar seasonal distribution patterns, and the dynamic range of the resultant NO2 concentrations from these three models was similar, while that from RF was relatively small. The multivariate linear regression approaches are expected to be promising methods for spatial interpolation of ground-level NO2 concentrations and other parameters in urban areas.
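
The LOOCV comparison described above can be sketched with scikit-learn. This is a minimal illustration on synthetic predictors and NO2 values, not the study's Seoul dataset; regression kriging is omitted because it needs a geostatistical library, so only MLR, RF, and SVR appear, and the IOA is computed as Willmott's index of agreement.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))  # synthetic predictors (e.g., traffic, land use)
y = X @ np.array([4.0, 2.0, -1.0, 0.5, 0.0]) + rng.normal(0, 1, 40)  # "NO2"

def index_of_agreement(obs, pred):
    """Willmott's index of agreement; 1.0 means perfect agreement."""
    denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((obs - pred) ** 2) / denom

for name, model in [("MLR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("SVR", SVR(kernel="rbf"))]:
    # Leave-one-out: each station/day is predicted from all the others.
    pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    print(f"{name}: IOA = {index_of_agreement(y, pred):.3f}")
```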