• Title/Abstract/Keyword: a-level set decomposition


프로세스 영역 의존성을 이용한 TMMi 레벨 1 단계화 방안 (A Decomposition Method for TMMi Maturity Level 1 using Process Area Dependency Analysis)

  • 김선준;류성열;오기성
    • 한국컴퓨터정보학회논문지 / Vol. 15, No. 12 / pp. 189-196 / 2010
  • Most domestic software organizations are at TMMi maturity level 2 or below. The first prerequisite for improving maturity is to know the current maturity level accurately. TMMi does not define level 1, yet even among level-1 organizations there are clear differences in maturity. This study therefore presents a method to assess the maturity of level-1 organizations precisely and to help them reach level 2 with less improvement effort. Level 1 is newly divided into three stages by grouping the level-2 sub-practices that have dependencies with their corresponding process areas. Dependencies are used because improving a process by bundling dependent practices together achieves several practices at once. By validating the appropriateness of the three-stage division, the maturity of level-1 organizations was assessed accurately, and it was confirmed that concrete goals and directions for the next improvement stage can be set.

양극단 지연시간의 분할을 이용한 분산 실시간 시스템의 설계 (Designing Distributed Real-Time Systems with Decomposition of End-to-End Timing Constraints)

  • 홍성수
    • 제어로봇시스템학회논문지 / Vol. 3, No. 5 / pp. 542-554 / 1997
  • In this paper, we present a resource-conscious approach to designing distributed real-time systems as an extension of our original approach [8][9], which was limited to single-processor systems. Starting from a given task graph and a set of end-to-end constraints, we automatically generate task attributes (e.g., periods and deadlines) such that (i) the task set is schedulable, and (ii) the end-to-end timing constraints are satisfied. The method works by first transforming the end-to-end timing constraints into a set of intermediate constraints on task attributes, and then solving the intermediate constraints. The complexity of constraint solving is tackled by splitting the problem into relatively tractable parts and solving each sub-problem with heuristics that enhance schedulability. In this paper, we build on our single-processor solution and show how it can be extended to distributed systems. The extension to distributed systems reveals many interesting sub-problems, whose solutions are presented in this paper. The main challenges arise from end-to-end propagation delay constraints, and therefore this paper focuses on our solutions for such constraints. We begin by extending our communication scheme to provide tight delay bounds across a network, while hiding the low-level details of network communication. We also develop an algorithm that decomposes end-to-end bounds into local bounds on each processor by making extensive use of the relative load on each processor. This results in significant decoupling of the constraints on each processor, without losing the capability to find a schedulable solution. Finally, we show how each of these parts fits into our overall methodology, using our previous results for single-processor systems.

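The abstract above decomposes end-to-end delay bounds into per-processor bounds by using the relative load on each processor. The paper's actual algorithm is not reproduced here; the following is only a minimal Python sketch of the proportional-to-load idea, with all function names and numbers invented for illustration.

```python
# Illustrative sketch (not the authors' algorithm): split an end-to-end
# delay bound across the processors a task chain visits, proportionally
# to each processor's relative load, so lightly loaded nodes receive
# tighter local deadlines and heavily loaded nodes receive more slack.

def decompose_end_to_end_bound(e2e_bound, loads):
    """Return a local delay bound per processor.

    e2e_bound : total allowed end-to-end delay (e.g., in ms)
    loads     : utilization of each processor on the task chain (0..1)
    """
    total = sum(loads)
    if total == 0:
        # No load information: fall back to an even split.
        return [e2e_bound / len(loads)] * len(loads)
    return [e2e_bound * (u / total) for u in loads]

if __name__ == "__main__":
    # A chain crossing three processors with utilizations 0.25, 0.5, 0.25
    # and a 100 ms end-to-end propagation delay constraint.
    print(decompose_end_to_end_bound(100.0, [0.25, 0.5, 0.25]))
    # -> [25.0, 50.0, 25.0]
```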

Deriving the Effective Atomic Number with a Dual-Energy Image Set Acquired by the Big Bore CT Simulator

  • Jung, Seongmoon;Kim, Bitbyeol;Kim, Jung-in;Park, Jong Min;Choi, Chang Heon
    • Journal of Radiation Protection and Research / Vol. 45, No. 4 / pp. 171-177 / 2020
  • Background: This study aims to determine the effective atomic number (Zeff) from dual-energy image sets obtained using a conventional computed tomography (CT) simulator. The estimated Zeff can be used for deriving the stopping power and for material decomposition of CT images, thereby improving dose calculations in radiation therapy. Materials and Methods: An electron-density phantom was scanned using a Philips Brilliance CT Big Bore at 80 and 140 kVp. The estimated Zeff values were compared with those obtained using the calibration phantom by applying the Rutherford, Schneider, and Joshi methods. The fitting parameters were optimized using a nonlinear least-squares regression algorithm. The fitting curve and mass attenuation data were obtained from the National Institute of Standards and Technology. The fitting parameters were validated by estimating the residual errors between the reference and calculated Zeff values. Next, the calculation accuracy of Zeff was evaluated by comparing the calculated values with the reference Zeff values of the insert plugs. The exposure levels of patients under additional CT scanning at 80, 120, and 140 kVp were evaluated by measuring the weighted CT dose index (CTDIw). Results and Discussion: The residual errors of the fitting parameters were lower than 2%. The best and worst Zeff values were obtained using the Schneider and Joshi methods, respectively. The maximum differences between the reference and calculated values were 11.3% (for lung during inhalation), 4.7% (for adipose tissue), and 9.8% (for lung during inhalation) when applying the Rutherford, Schneider, and Joshi methods, respectively. Under dual-energy scanning (80 and 140 kVp), the patient exposure level was approximately twice that of general single-energy scanning (120 kVp). Conclusion: Zeff was calculated from two image sets scanned by a conventional single-energy CT simulator, and the results obtained using the three different methods were compared. The Zeff calculation based on single-energy scans was shown to be feasible.
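The abstract above fits calibration parameters with nonlinear least squares and then estimates Zeff from a dual-energy image pair. As a rough illustration of that workflow, the sketch below fits an assumed power-law relation between the 80/140 kVp attenuation ratio and Zeff with scipy; the functional form, parameter names, and calibration numbers are placeholders, not the Rutherford, Schneider, or Joshi parametrizations used in the paper.

```python
# Illustrative sketch only: fit a simple power-law calibration between the
# low/high-kVp attenuation ratio and the effective atomic number, then
# invert it for unknown materials.
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(z_eff, a, b, n):
    # Assumed calibration form: R(Z) = a + b * Z**n
    return a + b * z_eff**n

# Hypothetical calibration data: reference Zeff of phantom inserts and the
# measured 80 kVp / 140 kVp attenuation ratio of each insert.
z_ref = np.array([5.9, 6.6, 7.4, 7.6, 8.1, 10.0, 12.5, 13.6])
ratio = np.array([1.02, 1.05, 1.09, 1.10, 1.13, 1.25, 1.52, 1.71])

params, _ = curve_fit(ratio_model, z_ref, ratio,
                      p0=[1.0, 1e-4, 4.0],
                      bounds=([0.5, 1e-8, 2.0], [1.5, 1e-2, 6.0]))

def estimate_z_eff(measured_ratio, params, z_grid=np.linspace(4, 20, 2000)):
    # Invert the fitted curve numerically by nearest match on a grid.
    residual = np.abs(ratio_model(z_grid, *params) - measured_ratio)
    return z_grid[np.argmin(residual)]

print(estimate_z_eff(1.30, params))   # Zeff estimate for a new measurement
```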

지능형 제품설계 시스템 개발을 위한 자동변속기 레버 구조부의 기능분해 (Function Decomposition of Structural Part in Automatic Transmission Lever for the Development of Intelligent Product Design System)

  • 하상도;김원기;고희병;차성운
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2001년도 춘계학술대회 논문집 / pp. 622-626 / 2001
  • Every design activity has the goal of satisfying a set of functional requirements. The commencement of a design, therefore, must be founded on the identification of those functional requirements. Many industrial design practices can be categorized as the design of small systems, which are defined as having a limited and fixed set of functional requirements to be satisfied at all times. For small systems, decomposing the functional requirements and mapping those at the lowest level onto specific design features facilitates the construction of a knowledge-based system for a specific purpose. When the number of design features is large, they need to be managed in groups. This paper suggests a grouping method so that the design process can be regarded as a series of selections of predefined functional primitives according to the requirements and the preceding selections. As an example, an intelligent product design system for automatic transmission lever design is developed.

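The abstract above describes decomposing functional requirements, mapping the lowest-level ones onto design features, and managing the features in groups. A toy Python sketch of that idea follows; the lever-related requirement and feature names are hypothetical and are not taken from the paper's knowledge base.

```python
# A toy sketch: decompose a functional requirement into a tree, map the
# lowest-level requirements onto candidate design features, and manage
# those features in groups, so the design process becomes a sequence of
# selections from predefined groups.
fr_tree = {
    "transmit shift command": {
        "hold selected gear position": {},
        "guide lever motion": {
            "constrain motion to gate path": {},
            "provide detent feedback": {},
        },
    },
}

feature_map = {   # lowest-level FR -> candidate design features (assumed)
    "hold selected gear position": ["locking pin", "spring-loaded plunger"],
    "constrain motion to gate path": ["gate plate slot"],
    "provide detent feedback": ["detent spring", "ball detent"],
}

def leaves(tree):
    """Collect the lowest-level functional requirements."""
    out = []
    for name, sub in tree.items():
        out.extend(leaves(sub) if sub else [name])
    return out

# Group candidate design features by the leaf requirement they satisfy.
groups = {fr: feature_map.get(fr, []) for fr in leaves(fr_tree)}
for fr, feats in groups.items():
    print(fr, "->", feats)
```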

Goal-oriented multi-collision source algorithm for discrete ordinates transport calculation

  • Wang, Xinyu;Zhang, Bin;Chen, Yixue
    • Nuclear Engineering and Technology / Vol. 54, No. 7 / pp. 2625-2634 / 2022
  • Discretization errors are an extremely challenging problem in discrete ordinates calculations for radiation transport problems with void regions. In previous work, we presented a multi-collision source (MCS) method to overcome discretization errors, but its efficiency needed to be improved. This paper proposes a goal-oriented algorithm for the MCS method that adaptively determines the partitioning of the geometry and dynamically changes the angular quadrature in the remaining iterations. An importance factor based on an adjoint transport calculation provides the response function used to obtain a problem-dependent, goal-oriented spatial decomposition. The difference in scalar fluxes between a high-order quadrature set and a lower-order one provides the error estimate that drives the dynamic quadrature. The goal-oriented algorithm allows optimization by using ray-tracing technology or high-order quadrature sets in the first few iterations and arranging the integration order of the remaining iterations from high to low. The algorithm has been implemented in the 3D transport code ARES and tested on the Kobayashi benchmarks. The numerical results show a reduction in computation time on these problems for the same desired level of accuracy compared with the standard ARES code, and the method has clear advantages over the traditional MCS method in solving radiation transport problems with reflective boundary conditions.
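The abstract above uses the difference in scalar flux between a high-order and a lower-order quadrature set as an error estimate for dynamically lowering the quadrature order. The sketch below illustrates only that selection rule; it is not the ARES implementation, and the flux values and tolerance are made up.

```python
# Illustrative sketch: keep the cheaper quadrature only in the cells where
# the relative flux difference between the two quadrature orders is below
# a tolerance; elsewhere retain the high-order set.
import numpy as np

def select_quadrature(phi_high, phi_low, tol=1e-3):
    """Return a boolean mask: True where the low-order set is acceptable."""
    rel_err = np.abs(phi_high - phi_low) / np.maximum(np.abs(phi_high), 1e-30)
    return rel_err < tol

phi_s8 = np.array([1.00, 0.52, 0.130, 0.0410])   # scalar flux, higher order
phi_s4 = np.array([1.00, 0.53, 0.128, 0.0385])   # scalar flux, lower order
print(select_quadrature(phi_s8, phi_s4, tol=0.02))
# -> [ True  True  True False]: only the last cell keeps the high-order set
```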

3 자유도 물고기 로봇의 동적해석 및 운동파라미터 최적화에 관한 연구 (A Study on Optimization of Motion Parameters and Dynamic Analysis for 3-D.O.F Fish Robot)

  • 김형석;;이병룡;유호영
    • 대한기계학회논문집A / Vol. 33, No. 10 / pp. 1029-1037 / 2009
  • Recently, mobile robot technologies have been advancing rapidly in fields such as cleaning robots, explosive ordnance disposal robots, and patrol robots. However, research on autonomous underwater robots has been comparatively limited, and the technology is still at an early stage. This paper describes a 3-joint (4-link) fish robot model. We derive the dynamic equations of motion of this fish robot and use the Singular Value Decomposition (SVD) method to reduce the divergence of the fish robot's motion when it operates in an underwater environment. We also analyze the response characteristics of the fish robot with respect to the parameters of the input torque function and compare the characteristics of the 3-joint fish robot with those of a 2-joint fish robot. Finally, the fish robot's maximum velocity is optimized using a combination of a Hill Climbing Algorithm (HCA) and a Genetic Algorithm (GA): the HCA generates a good initial population for the GA, which then finds the optimal parameter set that gives the maximum propulsion power, so that the fish robot swims at its fastest velocity.
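The abstract above combines a Hill Climbing Algorithm with a Genetic Algorithm, the former seeding the latter's initial population. The following Python sketch shows that combination on a made-up objective; the propulsion function, parameter ranges, and hyperparameters are placeholders, not the fish-robot dynamic model from the paper.

```python
# Illustrative sketch of the HCA + GA idea: hill climbing seeds the genetic
# algorithm's initial population, and the GA then searches for the motion
# parameters that maximize a (placeholder) propulsion objective.
import random

def propulsion(params):                      # placeholder objective
    a, f, phase = params
    return -(a - 0.8) ** 2 - (f - 2.0) ** 2 - (phase - 0.5) ** 2

def hill_climb(start, steps=200, sigma=0.05):
    best = start
    for _ in range(steps):
        cand = [x + random.gauss(0, sigma) for x in best]
        if propulsion(cand) > propulsion(best):
            best = cand
    return best

def ga(population, generations=100, sigma=0.02):
    for _ in range(generations):
        population.sort(key=propulsion, reverse=True)
        parents = population[: len(population) // 2]
        children = []
        while len(children) < len(population) - len(parents):
            p1, p2 = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, sigma)
                             for x, y in zip(p1, p2)])
        population = parents + children
    return max(population, key=propulsion)

random.seed(0)
# HCA generates a good initial population for the GA.
seeds = [hill_climb([random.uniform(0, 3) for _ in range(3)]) for _ in range(10)]
print(ga(seeds))        # best (amplitude, frequency, phase) found
```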

Impulse Response of Inflation to Economic Growth Dynamics: VAR Model Analysis

  • DINH, Doan Van
    • The Journal of Asian Finance, Economics and Business / Vol. 7, No. 9 / pp. 219-228 / 2020
  • The study investigates the impact of the inflation rate on economic growth to find the best-fit model for economic growth in Vietnam. It applies a Vector Autoregressive (VAR) model, cointegration models, and unit root tests to time-series data from 1996 to 2018 to test the impact of inflation on economic growth in the short and long term. The study showed that the two variables are stationary at the first difference, I(1), at the 1%, 5%, and 10% levels; the trace test indicates two cointegrating equations at the 0.05 level; INF does not Granger-cause GDP; the optimal lag is 1; and the variables are closely related, with an R2 of 72%. It finds that the VAR model's results provide a sound basis for explaining economic growth and that the inflation rate is positively related to economic growth. The results support the monetary policy. The study identifies issues for the Government to consider: adopt a comprehensive mix of macroeconomic, monetary, fiscal, and other policies to control inflation and stimulate growth; set sustainable economic growth as a priority goal; and do not pursue economic growth by letting inflation persist in the long term, but take appropriate measures to stabilize inflation in line with the best-fitted VAR forecast model.
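The abstract above follows a standard workflow: unit-root testing, VAR estimation with lag selection, and a Granger causality check. The sketch below reproduces that workflow with statsmodels on synthetic series; the data are placeholders, not the 1996-2018 Vietnamese GDP and inflation series.

```python
# Illustrative workflow: ADF unit-root test on first differences, VAR fit
# with lag order chosen by AIC, and a Granger causality test.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(1)
n = 60
gdp = np.cumsum(rng.normal(0.05, 0.02, n))      # I(1)-like growth series
inf = np.cumsum(rng.normal(0.03, 0.02, n))      # I(1)-like inflation series
data = pd.DataFrame({"GDP": gdp, "INF": inf})

# 1. Unit-root (ADF) test on the first differences, as in the I(1) setup.
diff = data.diff().dropna()
for col in diff:
    stat, pvalue = adfuller(diff[col])[:2]
    print(col, "ADF p-value on first difference:", round(pvalue, 3))

# 2. Fit a VAR on the differenced series, lag order selected by AIC.
result = VAR(diff).fit(maxlags=4, ic="aic")
print("selected lag order:", result.k_ar)

# 3. Granger causality: does INF help predict GDP?
gc_result = grangercausalitytests(diff[["GDP", "INF"]], maxlag=2)
```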

한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성 (Korean Sentence Generation Using Phoneme-Level LSTM Language Model)

  • 안성만;정여진;이재준;양지헌
    • 지능정보연구 / Vol. 23, No. 2 / pp. 71-88 / 2017
  • A language model predicts the next word or character from sequentially observed input and is used in natural language processing and speech recognition. With recent advances in deep learning, recurrent neural networks, which effectively capture dependencies among input elements, and their refinement, the Long Short-Term Memory (LSTM) model, have been adopted for language modeling. To feed data into such models, sentences are usually decomposed into words or morphemes and a word-level or morpheme-level model is used. However, because a text generally contains a very large number of distinct words or morphemes, the vocabulary becomes large, the model's complexity grows accordingly, and tokens outside the vocabulary cannot be generated. In particular, for a language such as Korean with rich morphological variation, the decomposition step performed by a morphological analyzer can introduce additional errors. To address this, this paper proposes a phoneme-level LSTM language model that decomposes sentences into phoneme units (consonants and vowels) and uses them as input. Models with three or four LSTM layers are used. For optimization, the stochastic gradient algorithm and several improved variants are applied and their performance is compared. Experiments were conducted on the Old Testament text, and all experiments were run with the Keras package on a Theano backend. For quantitative comparison, the validation loss and the perplexity on the test set were computed. The stochastic gradient algorithm showed relatively large validation loss and perplexity, while the other optimization algorithms produced similar values and comparable model complexity. The four-layer model took on average about 69% longer to train than the three-layer model, but its quantitative metrics did not improve much and even worsened under certain conditions. Nevertheless, the four-layer model generated better-formed sentences than the three-layer one. Under none of the simulation conditions considered in this paper were character combinations generated that do not occur in Korean, and the generated sentences were remarkably well formed in terms of noun-particle combinations, verb conjugation, and subject-verb agreement. These results are expected to be widely applicable to Korean-language processing in natural language processing and speech recognition, which underlie today's artificial intelligence systems.
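The abstract above describes a phoneme-level LSTM language model with three or four LSTM layers, trained with Keras and evaluated by perplexity. The sketch below shows a comparable three-layer model using the Keras API shipped with TensorFlow (the paper used Keras on a Theano backend); the vocabulary size, window length, and training arrays are toy placeholders, not the Old Testament corpus.

```python
# Minimal sketch of a phoneme-level LSTM language model: an embedding,
# three stacked LSTM layers, and a softmax over the next phoneme.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 70          # assumed number of distinct Korean phonemes + symbols
seq_len = 40             # assumed window of preceding phonemes

model = keras.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(vocab_size, activation="softmax"),  # next-phoneme distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy stand-in for phoneme-index sequences; real input would come from
# decomposing sentences into consonant/vowel units.
x = np.random.randint(0, vocab_size, size=(256, seq_len))
y = np.random.randint(0, vocab_size, size=(256,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

# Perplexity from the average cross-entropy loss, as used for evaluation.
loss = model.evaluate(x, y, verbose=0)
print("perplexity:", float(np.exp(loss)))
```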

Damage detection in truss structures using a flexibility based approach with noise influence consideration

  • Miguel, Leandro Fleck Fadel;Miguel, Leticia Fleck Fadel;Riera, Jorge Daniel;Menezes, Ruy Carlos Ramos De
    • Structural Engineering and Mechanics / Vol. 27, No. 5 / pp. 625-638 / 2007
  • The damage detection process may appear difficult to implement for truss structures because not all degrees of freedom in the numerical model can be measured experimentally. In this context, the damage locating vector (DLV) method, introduced by Bernal (2002), is a useful approach because it is effective when operating with an arbitrary number of sensors, a truncated modal basis, and multiple damage scenarios, while keeping the computational effort low. In addition, the present paper evaluates the influence of noise on the accuracy of the DLV method. In order to verify the DLV behavior under different damage intensities and, mainly, in the presence of measurement noise, a parametric study was carried out. Different excitations as well as damage scenarios were numerically tested on a continuous Warren truss structure subjected to five noise levels with a limited set of measurement sensors. Besides this, another way to determine the damage locating vectors in the DLV procedure is proposed. The idea is to contribute an alternative option that solves the problem with a more widespread algebraic method: the original formulation via singular value decomposition (SVD) is replaced by the common solution of an eigenvector-eigenvalue problem. The final results show that the DLV method, enhanced with the alternative solution proposed in this paper, was able to correctly locate the damaged bars, using an output-only system identification procedure, even for small damage intensities and moderate noise levels.
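The abstract above replaces the SVD step of the DLV method with an ordinary eigenvalue problem. The sketch below illustrates that equivalence on a random rank-deficient matrix: the damage locating vectors correspond to the (near) null space, obtainable either from the right singular vectors of the flexibility-change matrix or from the eigenvectors of its Gram matrix with the smallest eigenvalues. The matrix is a placeholder, not data from the Warren truss example.

```python
# Illustrative sketch of the algebraic substitution: null-space vectors of
# the change-in-flexibility matrix DF, via SVD or via an eigenvalue problem.
import numpy as np

rng = np.random.default_rng(0)
DF = rng.normal(size=(6, 6))
DF[:, -1] = DF[:, 0] + DF[:, 1]          # force a rank deficiency

# Option 1: SVD, keep right singular vectors with negligible singular values.
U, s, Vt = np.linalg.svd(DF)
dlv_svd = Vt[s < 1e-10 * s.max()].T

# Option 2: eigen-decomposition of the symmetric matrix DF^T DF,
# keeping the eigenvectors with the smallest eigenvalues.
w, V = np.linalg.eigh(DF.T @ DF)
dlv_eig = V[:, w < 1e-10 * w.max()]

# Both should span the same one-dimensional (near) null space.
print(dlv_svd.shape, dlv_eig.shape)
print(np.allclose(np.abs(dlv_svd.T @ dlv_eig),
                  np.eye(dlv_svd.shape[1]), atol=1e-6))
```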

Seismic vulnerability assessment of a historical building in Tunisia

  • El-Borgi, S.;Choura, S.;Neifar, M.;Smaoui, H.;Majdoub, M.S.;Cherif, D.
    • Smart Structures and Systems / Vol. 4, No. 2 / pp. 209-220 / 2008
  • A methodology for the seismic vulnerability assessment of historical monuments is presented in this paper. The ongoing work has been conducted in Tunisia within the framework of the FP6 European Union project (WIND-CHIME) on the use of appropriate modern seismic protective systems in the conservation of Mediterranean historical buildings in earthquake-prone areas. The case study is the five-century-old Zaouia of Sidi Kassem Djilizi, located in downtown Tunis, the capital of Tunisia. Ambient vibration tests were conducted on the case study using a number of force-balance accelerometers placed at selected locations. The Enhanced Frequency Domain Decomposition (EFDD) technique was applied to extract the dynamic characteristics of the monument. A 3-D finite element model was developed and updated to obtain reasonable correlation between experimental and numerical modal properties. The set of parameters selected for the updating consists of the modulus of elasticity in each wall element of the finite element model. Seismic vulnerability assessment of the case study was carried out via three-dimensional time-history dynamic analyses of the structure. Dynamic stresses were computed and damage was evaluated according to a masonry-specific plane failure criterion. Statistics on the occurrence, location, and type of failure provide a general view of the probable damage level and mode. Results indicate a high vulnerability, which confirms the need for intervention and retrofit.
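The abstract above applies the Enhanced Frequency Domain Decomposition technique to ambient vibration records. The sketch below shows only the basic frequency-domain decomposition step (SVD of the cross-spectral density matrix and peak picking on the first singular value), not the full EFDD procedure; the two-channel signal is synthetic, not the Zaouia measurements.

```python
# Illustrative sketch of frequency-domain decomposition: build the cross-
# spectral density matrix of the acceleration channels, take its SVD at
# every frequency line, and read natural frequencies off the peaks of the
# first singular value.
import numpy as np
from scipy.signal import csd, find_peaks

fs = 200.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
# Two hypothetical channels dominated by modes near 3 Hz and 7 Hz.
ch1 = np.sin(2*np.pi*3*t) + 0.4*np.sin(2*np.pi*7*t) + 0.3*rng.normal(size=t.size)
ch2 = 0.6*np.sin(2*np.pi*3*t) - 0.8*np.sin(2*np.pi*7*t) + 0.3*rng.normal(size=t.size)
channels = [ch1, ch2]

# Cross-spectral density matrix G(f), shape (n_freq, n_ch, n_ch).
f, _ = csd(channels[0], channels[0], fs=fs, nperseg=1024)
G = np.zeros((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(channels[i], channels[j], fs=fs, nperseg=1024)

# First singular value of G(f) at each frequency; its peaks approximate
# the natural frequencies.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
peaks, _ = find_peaks(s1, height=0.1 * s1.max())
print("identified frequencies (Hz):", f[peaks])
```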