• Title/Summary/Keyword: initial value problem


The Evaluation of an additional Weight Shoe's Function developed for the Improvement of Aerobic Capacity (유산소 운동능력 향상을 위한 중량물 부가 신발의 기능성 평가)

  • Kwak, Chang-Soo;Kim, Hee-Suk
    • Korean Journal of Applied Biomechanics
    • /
    • v.14 no.3
    • /
    • pp.67-82
    • /
    • 2004
  • The purpose of this study was to evaluate the function and safety of an additional-weight shoe developed to improve aerobic capacity, and to address problems found during subject tests of the shoe. The subjects were 10 college students. Four video cameras, an AMTI force platform, and a Pedar insole pressure distribution measurement device were used to analyze foot motions. The results were as follows: 1. The initial Achilles tendon angle and initial rearfoot pronation angle of the additional-weight shoe during walking were 183.7 deg and 2.33 deg, respectively, smaller than in the barefoot condition. The maximum Achilles tendon angle and the angular displacement of the Achilles tendon angle were 185.35 deg and 4.21 deg, respectively, also smaller than in the barefoot condition. Thus the rearfoot stability variables were within the permissible range for safety. 2. The maximal anterior-posterior ground reaction force of the additional-weight shoe was 1.01-1.2 B.W., larger than in the barefoot condition, and the time to MAPGRF was longer than in the barefoot condition. The maximal vertical ground reaction force of the additional-weight shoe was 2.3-2.7 B.W., larger than the barefoot condition in the propulsive force region, while the barefoot condition was larger in the braking force region; the time to MVGRF of the additional-weight shoe was longer than in the barefoot condition. 3. Regional peak pressure was higher in the medial region than in the lateral region, in contrast to conventional running shoes. The order of regional peak pressure instants was M1-M2-M7-M4-M6-M5-M3, which differed from conventional running shoes, and the regional impulse showed abnormal patterns. The analysis of rearfoot control and ground reaction force during walking gave no evidence that the additional-weight shoe has function or safety problems. However, a small problem appeared in the pressure distribution; it was considered possible to address this by redesigning the inner geometry. Because of the short research period, this study could not establish long-term safety for the human body or exercise effects, so a long-term subject study will be necessary in future work.

Performance of High Temperature Filter System for Radioactive Waste Vitrification Plant (방사성폐기물 유리화 플랜트 고온여과시스템의 성능 특성)

  • Seung-Chul, Park;Tae-Won, Hwang;Sang-Woon, Shin;Jong-Hyun, Ha;Hey-Suk, Kim;So-Jin, Park
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.2 no.3
    • /
    • pp.201-209
    • /
    • 2004
  • Important operation parameters and the performance of a high temperature ceramic candle filter system were evaluated through a series of demonstration tests at a pilot-scale vitrification plant. In the initial period of each test, due to the growth of dust cake on the surface of the ceramic candles, the pressure drop across the filter media increased sharply; after that it stabilized within a certain range and varied continuously in proportion to the face velocity of the off-gas. Conversely, in the initial period of each test the permeability of the filter elements decreased rapidly and then stabilized. Back flushing of the filter system was effective under a back-flushing air pressure of 3∼5 bar. Based on the dust concentrations measured by iso-kinetic dust sampling at the inlet and outlet of the HTF, the dust collection efficiency of the HTF was evaluated; the result met the design performance value of 99.9%. During the demonstration tests, including a hundred-hour-long test, no failure or problem affecting the performance of the HTF system was observed.

A STUDY ON FRACTURE STRENGTH OF COLLARLESS METAL CERAMIC CROWN WITH DIFFERENT METAL COPING DESIGN (금속코핑 설계에 따른 Collarless Metal Ceramic Crown의 파절강도에 관한 연구)

  • Yun, Jong-Wook;Yang, Jae-Ho;Chang, Ik-Tae;Lee, Sun-Hyung;Chung, Hun-Young
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.37 no.4
    • /
    • pp.454-464
    • /
    • 1999
  • The metal ceramic crown is currently the most popular complete veneer restoration in dentistry, but in many cases the metal cervical collar at the facial margin is unesthetic and unacceptable. A facial porcelain margin has been used in its place, but this does not solve problems such as dark gingival discoloration and the cervical opaque reflection of the porcelain veneer. Recently, metal copings designed to terminate their labio-cervical end on the axial walls coronal to the shoulder have been used clinically to solve the esthetic problem of the metal ceramic crown. In this design, however, the porcelain veneer of the labio-cervical area, which is not supported by metal, may not be able to resist the stresses of cementation and mastication. The purpose of this study was to evaluate the fracture strength and fracture appearance of crowns with different coping designs. A resin maxillary left central incisor analogue was prepared for a metal ceramic crown, and metal dies were made with a duplication mold. Metal copings were made and assigned to one of four groups based on the facial framework design: group 1, coping with a 0.5 mm metal collar; group 2, metal extended to the shoulder; group 3, metal extended to 1 mm coronal to the shoulder; group 4, metal extended to 2 mm coronal to the shoulder. Copings and crowns were adjusted to the same size and thickness, and cemented to the metal dies with zinc phosphate cement under finger pressure. Fracture strength was measured with an Instron universal testing machine. The metal dies were anchored in a three-way vice 3 mm below the finish line and inclined 130° to the long axis of the crown. Load was directed lingually 2 mm below the mid-incisal edge, and the load values at initial crack and at catastrophic fracture were recorded. The results were as follows: 1. Fracture strength values at initial crack were higher in groups 1 and 2 than in groups 3 and 4, but this difference was not statistically significant (p<0.05). 2. The conventional metal-collared crown had greater catastrophic fracture strength than any of the collarless crowns. 3. The greater the labial metal coping reduction, the lower the catastrophic fracture strength of the crowns, but when more than 1 mm of labial metal reduction was done, the difference in strengths was not statistically significant (p<0.05). 4. The strongest collarless coping design was group 2.

Improvement of Residual Delay Compensation Algorithm of KJJVC (한일상관기의 잔차 지연 보정 알고리즘의 개선)

  • Oh, Se-Jin;Yeom, Jae-Hwan;Roh, Duk-Gyoo;Oh, Chung-Sik;Jung, Jin-Seung;Chung, Dong-Kyu;Oyama, Tomoaki;Kawaguchi, Noriyuki;Kobayashi, Hideyuki;Kawakami, Kazuyuki;Ozeki, Kensuke;Onuki, Hirohumi
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.2
    • /
    • pp.136-146
    • /
    • 2013
  • In this paper, a residual delay compensation algorithm is proposed for the FX-type KJJVC. In the initial version of the KJJVC design, integer arithmetic and a cos/sin lookup table for the phase compensation coefficient were introduced in order to speed up the calculation. Mismatches between the data timing and the residual delay phase, and between bit-jumps and the residual delay phase, were found and fixed. In the final design of the KJJVC residual delay compensation algorithm, an initialization problem in the rotation memory of the residual delay compensation was found when the residual-delay-compensated value was applied to an FFT segment, and this problem was also fixed by modifying the FPGA code. Using the proposed residual delay compensation algorithm, the band shape of the cross-power spectrum becomes flat, which means there is no significant loss over the whole bandwidth. To verify the effectiveness of the proposed algorithm, we conducted correlation experiments on real observation data using the simulator and KJJVC. We confirmed that the designed residual delay compensation algorithm works correctly in KJJVC, and the signal-to-noise ratio increased by about 8%.
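
The core operation in this kind of residual delay compensation is a per-channel phase rotation of each FFT segment, so that the residual (sub-sample) delay no longer leaves a linear phase slope across the band. The sketch below is only a floating-point NumPy illustration of that idea, not the KJJVC FPGA implementation (which uses integer arithmetic and a cos/sin lookup table); the function name and parameters are assumptions.

```python
import numpy as np

def compensate_residual_delay(segment_spectrum, residual_delay_s, bandwidth_hz):
    """Rotate each frequency channel of one FFT segment to remove the
    linear phase slope left by a residual (sub-sample) delay."""
    n = len(segment_spectrum)
    freqs = np.arange(n) * (bandwidth_hz / n)        # channel frequencies in Hz
    phase = 2.0 * np.pi * freqs * residual_delay_s   # phase slope across the band
    return segment_spectrum * np.exp(-1j * phase)

# Illustration: a pure delay appears as a linear phase ramp; after compensation
# the spectrum is flat (zero phase) across the whole band.
spectrum = np.exp(2j * np.pi * np.arange(256) * (16e6 / 256) * 5e-9)  # 5 ns residual delay
flat = compensate_residual_delay(spectrum, 5e-9, 16e6)
print(np.allclose(np.angle(flat), 0.0))   # True
```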

Determination of Unit Hydrograph for the Hydrological Modelling of Long-term Run-off in the Major River Systems in Korea (장기유출의 수문적 모형개발을 위한 주요 수계별 단위도 유도)

  • 엄병현;박근수
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.26 no.4
    • /
    • pp.52-65
    • /
    • 1984
  • In general, precise estimation of the hourly or daily distribution of long-term run-off is very important in the design of irrigation sources. However, there has been no satisfactory method for forecasting stationary long-term run-off in Korea. To address this problem, this study introduces the unit hydrograph method, frequently used in short-term run-off analysis, into long-term run-off analysis; the Sumgin River catchment area was selected as the model basin. In estimating effective rainfall, the conventional method neglects the soil moisture condition of the catchment area. In this study, the initial discharge (qb) occurring just before the rising phase of the hydrograph was selected as an index of the basin soil moisture condition and introduced as a third variable in the analysis of the relationship between cumulative rainfall and cumulative rainfall loss, which constitutes a new method of separating effective rainfall. In the next step, in order to normalize the significant potential error included in hydrological data, especially over a vast catchment area, Snyder's correlation method was applied. The key technique in this study is multiple correlation, or multiple regression analysis, which is based on the method of least squares and is solved as a system of linear equations. To verify how the characteristics of the unit hydrograph change with various hydrological characteristics (for example, precipitation, tree cover, and soil condition), seasonal unit hydrograph models were built for the dry season (autumn, winter), the semi-dry season (spring), and the rainy season (summer), respectively. The results obtained in this study are summarized as follows: 1. During the test period of 1966-1971, effective rainfall was estimated for a total of 114 run-off hydrographs. The relative error of estimation against the observed values was 6%, which is much smaller than the 12% error of the conventional method. 2. During the test period, the daily distribution of long-term run-off discharge was estimated with the unit hydrograph model. The relative error of estimation using the standard unit hydrograph model was 12%. When estimating with each seasonal unit hydrograph model, the relative error was 14% during the dry season, 10% during the semi-dry season, and 7% during the rainy season, which is much smaller than the 37% of the conventional method. Summing up these results, the qb-index method of this study estimates effective rainfall more precisely than any method developed before. Because no other method has yet been developed for estimating the daily distribution of long-term run-off discharge, the unit hydrograph estimates could only be compared with the Kaziyama method, which estimates monthly run-off discharge; even so, the method of this study turns out to have high accuracy. It is also worth noting that there is no need to use separate seasonal unit hydrograph models except in the semi-dry season. The author hopes to analyze the latter case in future studies.
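
The regression step mentioned above, deriving unit-hydrograph ordinates from effective rainfall and direct run-off by least squares posed as a system of linear equations, can be sketched as follows. This is a generic illustration with hypothetical daily values, not the authors' computation; the function name and the numbers are assumptions.

```python
import numpy as np

def derive_unit_hydrograph(effective_rainfall, direct_runoff, n_ordinates):
    """Least-squares derivation of unit-hydrograph ordinates u from the
    discrete convolution q[i] = sum_j p[i - j] * u[j], where p is effective
    rainfall and q is direct run-off, posed as the linear system P u ~ q."""
    p = np.asarray(effective_rainfall, dtype=float)
    q = np.asarray(direct_runoff, dtype=float)
    P = np.zeros((len(q), n_ordinates))
    for i in range(len(q)):                 # build the convolution matrix
        for j in range(n_ordinates):
            if 0 <= i - j < len(p):
                P[i, j] = p[i - j]
    u, *_ = np.linalg.lstsq(P, q, rcond=None)   # least-squares solution
    return u

# Hypothetical daily series: 3 days of effective rainfall and the resulting
# 6-day direct run-off hydrograph, solved for 4 unit-hydrograph ordinates.
u = derive_unit_hydrograph([10.0, 5.0, 2.0], [3.0, 9.0, 12.0, 8.0, 4.0, 1.0], 4)
print(np.round(u, 3))
```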

Ideal Right Ventricular Outflow Tract Size in Tetralogy of Fallot Total Correction (팔로네징후 완전교정술 시 이상적인 우심실 유출로 크기에 관한 분석)

  • Kim Jin-Sun;Choi Jin-Ho;Yang Ji-Hyuk;Park Pyo-Won;Youm Wook;Jun Tae-Gook
    • Journal of Chest Surgery
    • /
    • v.39 no.8 s.265
    • /
    • pp.588-597
    • /
    • 2006
  • Background: Surgical repair of tetralogy of Fallot (TOF) has been performed successfully with favorable early and late outcomes. However, the later development of pulmonary regurgitation and stenosis remains a problem, and their development may depend on the size of the right ventricular outflow tract (RVOT) reconstruction at the initial total correction. Hence, it is necessary to investigate the ideal size of the RVOT reconstruction. Material and Method: This prospective study was carried out to determine how the surgical strategy and the RVOT size affect the occurrence of pulmonary regurgitation and stenosis. From January 2002 to December 2004, 62 patients underwent TOF total correction. The RVOT size (diameter of the pulmonary valve annulus) of each case was measured after the RVOT reconstruction and converted to a Z value. A pre-scheduled follow-up (at discharge, 6 months, 1 year, 2 years and 3 years) was carried out by echocardiography to evaluate the degree of pulmonary regurgitation and stenosis. Result: The patients were divided into two groups (transannular group n=12, nontransannular group n=50) according to the method of RVOT reconstruction. The Z value of the RVOT (transannular group -1, range -3.6∼-0.8; nontransannular group -2.1, range -5.2∼-1.5) and the average pRV/LV after surgery (transannular group 0.44±0.09, nontransannular group 0.42±0.09) did not differ significantly between the two groups. Pulmonary regurgitation above a moderate degree occurred more frequently in the transannular group (p<0.01). In the nontransannular group, pulmonary regurgitation of more than moderate degree developed in patients with a larger RVOT size (Z value>0, p<0.02), and progressive pulmonary stenosis of more than mild-to-moderate degree developed in patients with a smaller RVOT size (Z value<-1.5, p<0.05). A moderate degree of pulmonary stenosis developed in 4 nontransannular patients; three underwent additional surgery and one underwent balloon valvuloplasty. Their RVOT Z values were -3.8, -3.8, -2.9, and -1.8, respectively. Conclusion: When carrying out a TOF total correction, the transannular RVOT reconstruction group has significantly more pulmonary regurgitation. In nontransannular RVOT reconstruction, the size of the RVOT should be maintained at a Z value between -1.5 and 0. If the Z value is less than -1.5, the patient should be followed up carefully for the possibility of pulmonary stenosis.

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and recommender systems are a key technology serving this need. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any evaluations or preference information, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty, and this sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from other users; therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two, gray sheep and others, based on the degree centrality of the users, and then apply different similarity measures and recommendation methods to the two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate those nodes whose degree centrality is lower than a pre-set threshold; the threshold value is determined by simulation such that the accuracy of CF on the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used to generate recommendations for these users. The F measures of the two datasets are weighted by the numbers of nodes and summed to give the final performance metric. In order to test the performance improvement of this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team, with 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using 'Best-N-neighbors' and 'cosine' similarity. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information beyond users' evaluations, such as demographic data, and some studies applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. The study has several theoretical and practical implications: it empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
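
The degree-centrality split of Steps 1-2 and the popular-item fallback of Step 4 can be illustrated with a small NumPy sketch. It follows only the abstract's description and is not the authors' code; the binary user-item matrix, the threshold value, and the function names are assumptions (in the paper the threshold is tuned by simulation).

```python
import numpy as np

def split_gray_sheep(user_item, threshold):
    """Steps 1-2: project the two-mode user-item network onto a one-mode
    user-user network, compute degree centrality, and separate users whose
    centrality falls below the threshold (the 'gray sheep')."""
    co_pref = user_item @ user_item.T          # shared-item counts between users
    adjacency = (co_pref > 0).astype(int)      # link users with at least one common item
    np.fill_diagonal(adjacency, 0)             # ignore self-links
    degree = adjacency.sum(axis=1)             # degree centrality of each user node
    gray_sheep = np.where(degree < threshold)[0]
    others = np.where(degree >= threshold)[0]
    return gray_sheep, others

def popular_item_recommendations(user_item, top_n=5):
    """Step 4 fallback: recommend the globally most popular items."""
    popularity = user_item.sum(axis=0)
    return np.argsort(popularity)[::-1][:top_n]

# Toy usage: ordinary CF (e.g. cosine similarity with Best-N-neighbors) would
# then run on the 'others' group only (Step 3).
rng = np.random.default_rng(0)
ratings = (rng.random((20, 30)) < 0.15).astype(int)   # toy binary user-item matrix
gray, rest = split_gray_sheep(ratings, threshold=5)
print(len(gray), len(rest), popular_item_recommendations(ratings))
```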

A study on improving self-inference performance through iterative retraining of false positives of deep-learning object detection in tunnels (터널 내 딥러닝 객체인식 오탐지 데이터의 반복 재학습을 통한 자가 추론 성능 향상 방법에 관한 연구)

  • Kyu Beom Lee;Hyu-Soung Shin
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.2
    • /
    • pp.129-152
    • /
    • 2024
  • In the application of deep learning object detection to CCTV in tunnels, a large number of false positive detections occur due to the poor environmental conditions of tunnels, such as low illumination and severe perspective effects. This problem directly impacts the reliability of a tunnel CCTV-based accident detection system that relies on object detection performance. Hence, it is necessary to reduce the number of false positive detections while also increasing the number of true positive detections. Based on a deep learning object detection model, this paper proposes a false positive training method that not only reduces false positives but also improves true positive detection performance through retraining on false positive data. The method follows these steps: initial training on a training dataset - inference on a validation dataset - correction of false positive data and dataset composition - addition to the training dataset and retraining. Experiments were conducted to verify the performance of this method. First, the optimal hyperparameters of the deep learning object detection model were determined through previous experiments. Then the training image format was determined, and experiments were conducted sequentially to check the long-term performance improvement obtained by retraining on repeatedly collected false detection datasets. In the first experiment, it was found that keeping the background in the inferred image was more advantageous for object detection performance than removing the background around the object. In the second experiment, it was found that retraining on false positives accumulated across retraining rounds was more advantageous for continuous improvement of object detection performance than retraining independently on each round's false positives. After retraining on the false positive data with the method determined in the two experiments, the car object class showed excellent inference performance, with an AP value of 0.95 or higher after the first retraining, and by the fifth retraining the inference performance had improved by about 1.06 times compared to the initial inference. The person object class continued to improve as retraining progressed, and by the 18th retraining its inference performance had improved by more than 2.3 times compared to the initial inference.
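
The retraining loop described above (train, infer on a validation set, correct false positives, add them to the training data, retrain, and accumulate corrections across rounds) can be summarized in a short sketch. The helpers train_model, run_inference, and correct_false_positives are hypothetical placeholders passed in by the caller, not part of the paper or of any particular framework.

```python
# Sketch of the accumulate-and-retrain loop; the three callables below are
# hypothetical placeholders for a real detection framework's training,
# inference, and relabeling steps.

def iterative_false_positive_retraining(train_set, val_set, train_model,
                                        run_inference, correct_false_positives,
                                        rounds=5):
    model = train_model(train_set)                       # initial training
    for _ in range(rounds):
        detections = run_inference(model, val_set)       # inference on validation images
        corrected = correct_false_positives(detections)  # relabel false positives (e.g. as background)
        # Accumulate corrected samples rather than replacing earlier ones;
        # the paper reports accumulation beats per-round independent retraining.
        train_set = train_set + corrected
        model = train_model(train_set)                   # retrain on the enlarged dataset
    return model
```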

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns in huge databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies are higher than a minimum support threshold; these patterns are called frequent patterns. Traditional frequent pattern mining uses a single minimum support threshold for the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus a pattern mining technique that reflects these characteristics is required. In the single-support framework, where the natures of items are not considered, the threshold must be set to a very low value to mine patterns containing rare items, which yields too many patterns containing meaningless items; in contrast, no such patterns can be mined if a high threshold is used. This dilemma is called the rare item problem. To solve it, early research proposed approximate approaches that split the data into several groups according to item frequencies or group related rare items. However, these methods cannot find all frequent patterns, including rare frequent patterns, because they are approximate techniques. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called the MIS (Minimum Item Support), calculated from the item's frequency in the database. By applying the MIS, the multiple minimum supports model finds all rare frequent patterns without generating meaningless patterns or losing significant patterns. Meanwhile, candidate patterns are extracted during the mining process; in the single minimum support model, only the single threshold is compared with the frequencies of the candidate patterns, so the characteristics of the items that make up a candidate pattern are not reflected and the rare item problem occurs. To address this issue in the multiple minimum supports model, the minimum MIS value among the items in a candidate pattern is used as the support threshold for that pattern, so that its characteristics are considered. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to the single minimum support model, where items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for MIS information; moreover, both compared algorithms show good scalability.
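
The key rule of the multiple-supports model (a candidate pattern is checked against the minimum MIS among its items) is easy to show in a few lines. The MIS assignment below, MIS(i) = max(beta * f(i), LS), is a commonly used formulation and an assumption here, since the abstract does not give the exact formula; the names and example values are illustrative only.

```python
def minimum_item_supports(item_frequencies, ls, beta=0.5):
    """Assign each item a MIS value from its frequency. The formula
    MIS(i) = max(beta * f(i), LS) is a common choice and an assumption here;
    LS is a user-given lower bound so rare items keep a usable threshold."""
    return {item: max(beta * freq, ls) for item, freq in item_frequencies.items()}

def is_frequent(pattern, support, mis):
    """Multiple-supports rule: a candidate pattern is frequent when its support
    reaches the minimum MIS among its items."""
    return support >= min(mis[item] for item in pattern)

# Toy example: 'd' is rare, so its low MIS lets {'a', 'd'} qualify without
# lowering the threshold for patterns made only of frequent items.
freqs = {'a': 0.40, 'b': 0.35, 'c': 0.30, 'd': 0.05}
mis = minimum_item_supports(freqs, ls=0.02)
print(is_frequent({'a', 'd'}, support=0.04, mis=mis))   # True
print(is_frequent({'a', 'b'}, support=0.04, mis=mis))   # False (needs >= 0.175)
```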

A Step-by-Step Primality Test (단계적 소수 판별법)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.103-109
    • /
    • 2013
  • The Miller-Rabin method is the most prevalently used primality test. However, it mistakenly reports a Carmichael number or a semiprime as prime (a strong liar) although they are composite. To mitigate this problem, it selects k values of m satisfying $m \in [2, n-1]$, $\gcd(m,n)=1$. The Miller-Rabin method decides that a given number $n$ is prime if, after computing $n-1=2^s d$, the outcome satisfies $m^d \equiv 1 \pmod{n}$ or $m^{2^r d} \equiv -1 \pmod{n}$ for some $0 \leq r \leq s-1$. This paper proposes a step-by-step primality testing algorithm that restricts m to 2, thereby achieving a probability of 98.8%. As a first step, the proposed method rejects composite numbers that do not satisfy $n=6k \pm 1$, $n_1 \neq 5$. Next, it determines primality by computing $2^{2^{s-1}d} \equiv \beta_{s-1} \pmod{n}$ and $2^d \equiv \beta_0 \pmod{n}$. In the third step, it tests $\beta_r \equiv -1$ in the range $1 \leq r \leq s-2$ for $\beta_0 > 1$. In the case of $\beta_0 = 1$, it retests m=3, 5, 7, 11, 13, 17 sequentially. When applied to n=[101, 1000], the proposed algorithm determined 96.55% of the primes in the initial stage; the remaining 3% were decided with $\beta_0 > 1$ and 0.55% with $\beta_0 = 1$.
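
Read as a standard strong-probable-prime routine with base 2 first and the bases 3, 5, 7, 11, 13, 17 as a fallback, the step-by-step test sketched in the abstract might look like the following. This is an interpretation, not the paper's exact procedure: the meaning of the $n_1 \neq 5$ filter is assumed to be "n does not end in the digit 5", and the fallback simply reruns the strong test with the extra bases.

```python
# A sketch based on the abstract above, not the paper's exact procedure.
# Assumptions: the "n1 != 5" filter is read as "n does not end in the digit 5",
# and the fallback step reruns the strong test with the bases 3, 5, 7, 11, 13, 17.

def strong_probable_prime(n, m):
    """Standard strong-probable-prime test of odd n > 2 to base m."""
    d, s = n - 1, 0
    while d % 2 == 0:               # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    beta = pow(m, d, n)             # beta_0 = m^d mod n
    if beta == 1 or beta == n - 1:
        return True
    for _ in range(s - 1):          # beta_r = m^(2^r * d) mod n
        beta = pow(beta, 2, n)
        if beta == n - 1:
            return True
    return False

def step_by_step_is_prime(n):
    if n < 2:
        return False
    if n in (2, 3, 5, 7, 11, 13, 17):
        return True
    # Step 1: quick rejection of obvious composites (even, not 6k +/- 1, or ending in 5).
    if n % 2 == 0 or n % 6 not in (1, 5) or n % 10 == 5:
        return False
    # Steps 2-3: base-2 strong test; Step 4: fall back to further bases.
    if not strong_probable_prime(n, 2):
        return False
    return all(strong_probable_prime(n, m) for m in (3, 5, 7, 11, 13, 17))

# Primes in [101, 150]: 101, 103, 107, 109, 113, 127, 131, 137, 139, 149
print([k for k in range(101, 151) if step_by_step_is_prime(k)])
```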

