

트리즈로 분석한 코로나19 대응 선별진료소의 진화 (The Evolution of Screening Center for COVID-19 Analyzed by TRIZ)

  • 송창룡
    • 산업경영시스템학회지
    • /
    • Vol. 45, No. 3
    • /
    • pp.139-149
    • /
    • 2022
  • Korea's COVID-19 quarantine system, widely referred to as 'K-Quarantine', is a globally recognized system that achieved two conflicting goals at once: public health and the economy. The system, represented by 3T (Test-Trace-Treat), does not block off regions; instead, it screens infected from non-infected persons and treats them accordingly. The screening center, one of the key elements of this screening-and-treatment system, evolved to suit each phase of COVID-19 and succeeded in the initial response by conducting large-scale tests quickly and safely. Analyzing the evolution of screening centers that produced such significant results from a problem-solving point of view demonstrates their significance as a practical success case of creative problem solving. In addition, the usefulness of TRIZ (a Russian acronym for the Theory of Inventive Problem Solving), a theory of creative problem solving, was confirmed through the analysis of verified cases of COVID-19 response. TRIZ is a problem-solving theory derived from analyzing regularities in invention patents, and it is widely used not only in technical fields but also in non-technical fields such as design, management, and education. The results of this study are expected to provide useful insights and practical examples to researchers interested in system analysis and TRIZ application from a problem-solving perspective.

문헌정보학에서 문제중심학습 (Problem-Based Learning) 적용 연구 I - 설계 모형 적용과 성찰일지 분석을 중심으로 - (A Study on the Application of PBL in Library and Information Science I: Course Developing and Analysis of Self-Reflective Journal)

  • 강지혜
    • 한국비블리아학회지
    • /
    • Vol. 28, No. 4
    • /
    • pp.321-340
    • /
    • 2017
  • This study designed a course model applying Problem-Based Learning (PBL) to a Library and Information Science course in Korea, applied it in an actual classroom, and analyzed the educational effects perceived by the students. A draft problem-solving plan was first developed through a review of prior studies, and the scenario was then revised through expert consultation. The problems were derived through analysis-phase activities (needs analysis, judging suitability for PBL, content analysis, learner analysis, environment analysis, deciding the PBL operating environment, and deciding the PBL class format) and design-phase activities (problem-situation design, learning-resource design, design for facilitating the problem-solving process, operation-strategy design, evaluation design, and PBL operating-environment design). After the first problem-situation class was conducted based on the initial scenario, the results of PBL were analyzed through the learners' self-reflective journals. The journals showed that critical thinking and creativity improved in the first PBL problem situation, and that methods for smooth communication and cooperation were devised and used. The findings from analyzing the educational effects and collecting revisions after the first problem-situation class will be used for the second round of revision and supplementation of the course design. By introducing a case of PBL model development, this study encourages follow-up classroom applications and research.

Local Similarity based Discriminant Analysis for Face Recognition

  • Xiang, Xinguang;Liu, Fan;Bi, Ye;Wang, Yanfang;Tang, Jinhui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9, No. 11
    • /
    • pp.4502-4518
    • /
    • 2015
  • Fisher linear discriminant analysis (LDA) is one of the most popular projection techniques for feature extraction and has been widely applied in face recognition. However, it cannot be used in the single sample per person (SSPP) setting because the intra-class variations cannot be estimated. In this paper, we propose a novel method called local similarity based linear discriminant analysis (LS_LDA) to solve this problem. Motivated by the "divide-and-conquer" strategy, we first divide the face into local blocks, classify each local block, and then integrate all the classification results to make the final decision. To make LDA feasible for the SSPP problem, we further divide each block into overlapped patches and assume that these patches come from the same class. To improve the robustness of LS_LDA to outliers, we also propose local similarity based median discriminant analysis (LS_MDA), which uses the class median vector to estimate the class population mean in LDA modeling. Experimental results on three popular databases show that our methods not only generalize well to the SSPP problem but also are strongly robust to variations in expression, illumination, occlusion, and time.
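The block-and-patch construction described in the abstract above can be sketched in a few lines. This is a minimal illustration under assumed shapes, not the authors' implementation; the function names and parameters are invented for the example:

```python
import numpy as np

def extract_patches(block, psize, stride):
    """Slide a window over one local block; the resulting overlapped
    patches are treated as samples of the same class (the SSPP workaround)."""
    h, w = block.shape
    patches = []
    for i in range(0, h - psize + 1, stride):
        for j in range(0, w - psize + 1, stride):
            patches.append(block[i:i + psize, j:j + psize].ravel())
    return np.array(patches)

def fisher_projection(class_patches):
    """One Fisher discriminant direction from per-class patch sets.
    class_patches: list of (n_i, d) arrays, one per person."""
    overall_mean = np.mean(np.vstack(class_patches), axis=0)
    d = overall_mean.size
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for X in class_patches:
        mu = X.mean(axis=0)              # class mean (LS_MDA would use the median)
        Sw += (X - mu).T @ (X - mu)      # within-class scatter
        diff = (mu - overall_mean)[:, None]
        Sb += len(X) * (diff @ diff.T)   # between-class scatter
    # leading eigenvector of Sw^-1 Sb gives the discriminant direction
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    return np.real(vecs[:, np.argmax(np.real(vals))])
```

Classifying each block with its own projection and then fusing the per-block decisions completes the divide-and-conquer scheme.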

Verification of Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE)

  • Khuwaileh, Bassam;Williams, Brian;Turinsky, Paul;Hartanto, Donny
    • Nuclear Engineering and Technology
    • /
    • Vol. 51, No. 4
    • /
    • pp.968-976
    • /
    • 2019
  • This paper presents a number of verification case studies for a recently developed sensitivity/uncertainty code package. The code package, ROMUSE (Reduced Order Modeling based Uncertainty/Sensitivity Estimator), is an effort to provide an analysis tool to be used in conjunction with reactor core simulators, in particular the Virtual Environment for Reactor Applications (VERA) core simulator. ROMUSE is written in C++ and is currently capable of performing various types of parameter perturbations and the associated sensitivity analysis, uncertainty quantification, surrogate model construction, and subspace analysis. The current version 2.0 can interface with the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) code, which gives ROMUSE access to the various algorithms implemented within DAKOTA, most importantly model calibration. The verification study is performed on two basic problems and two reactor physics models. The first problem verifies ROMUSE's single-physics gradient-based range-finding algorithm using an abstract quadratic model. The second is the Brusselator problem, a coupled problem representative of multi-physics problems, used to test the capability of constructing surrogates via ROMUSE-DAKOTA. Finally, light water reactor pin cell and sodium-cooled fast reactor fuel assembly problems are simulated via SCALE 6.1 to test ROMUSE's uncertainty quantification and sensitivity analysis capabilities.

순차 범주형 데이타의 최적 모수 설계를 위한 분석법 개발 (Development of Analysis Method of Ordered Categorical Data for Optimal Parameter Design)

  • 전태준;박호일;홍남표;최성조
    • 대한산업공학회지
    • /
    • Vol. 20, No. 1
    • /
    • pp.27-38
    • /
    • 1994
  • Accumulation analysis is difficult to apply to ordered categorical data other than smaller-the-better type problems. The purpose of this paper is to develop a statistic and a method that can easily be applied to general types of problems, including nominal-the-best type problems. Experimental data from a contact window process are analyzed, and the new procedure is compared with accumulation analysis.
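As a reminder of what accumulation analysis operates on, here is a minimal sketch of the cumulative category fractions for an ordered-category table; the counts are made-up illustration data, not the paper's contact window experiment:

```python
import numpy as np

def accumulation_fractions(counts):
    """Cumulative category fractions per factor level, the quantity that
    Taguchi's accumulation analysis works with.
    counts: (levels x ordered categories) table of tallies."""
    counts = np.asarray(counts, float)
    cum = np.cumsum(counts, axis=1)   # accumulate 'category <= j' per level
    return cum / counts.sum(axis=1, keepdims=True)
```

Each row then gives, for one factor level, the fraction of observations at or below each ordered category, which is what makes smaller-the-better problems the natural fit for the method.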


A Penalized Principal Component Analysis using Simulated Annealing

  • Park, Chongsun;Moon, Jong Hoon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 10, No. 3
    • /
    • pp.1025-1036
    • /
    • 2003
  • A variable selection algorithm for principal component analysis using a penalty function is proposed. We use the fact that the usual principal component problem can be expressed as a maximization problem with appropriate constraints, and we add a penalty function to this maximization problem. A simulated annealing algorithm is used to search for optimal solutions with penalty functions. Comparison of several well-known penalty functions through simulation reveals that the HARD penalty function is the best in several respects. Illustrations with real and simulated examples are provided.
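The penalized-maximization idea can be sketched as a toy simulated annealing over variable subsets. The objective (leading eigenvalue of the selected covariance), the penalty form, and the cooling schedule below are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def penalized_pc_objective(X, mask, lam):
    """Variance captured by the first PC restricted to selected variables,
    minus a penalty proportional to the number of variables kept."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    cov = np.atleast_2d(np.cov(Xs, rowvar=False))
    return np.max(np.linalg.eigvalsh(cov)) - lam * mask.sum()

def anneal_select(X, lam=0.1, steps=500, t0=1.0, seed=0):
    """Simulated annealing over 0/1 variable-inclusion masks."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    mask = np.ones(p, dtype=int)
    best = cur = penalized_pc_objective(X, mask, lam)
    best_mask = mask.copy()
    for k in range(steps):
        t = t0 / (1 + k)                  # simple cooling schedule
        cand = mask.copy()
        cand[rng.integers(p)] ^= 1        # flip one variable in or out
        val = penalized_pc_objective(X, cand, lam)
        # accept improvements always, worse moves with Boltzmann probability
        if val > cur or rng.random() < np.exp((val - cur) / t):
            mask, cur = cand, val
            if cur > best:
                best, best_mask = cur, mask.copy()
    return best_mask
```

Swapping in different penalty functions (SCAD, LASSO-style, HARD, and so on) only changes the `lam * mask.sum()` term.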

고유치 문제의 확률 유한요소 해석(Frame 구조물의 좌굴 신뢰성 해석) (Probabilistic Finite Element Analysis of Eigenvalue Problem(Buckling Reliability Analysis of Frame Structure))

  • 양영순;김지호
    • 한국전산구조공학회:학술대회논문집
    • /
    • Proceedings of the 1990 Fall Conference, Computational Structural Engineering Institute of Korea
    • /
    • pp.22-27
    • /
    • 1990
  • Since the eigenvalue problem in structural analysis is recognized as an important step in assessing structural strength, eigenvalue or buckling analysis is usually carried out when the compression behavior of a member is dominant. In general, the various variables involved in the eigenvalue problem also show variability, so it is natural to apply probabilistic analysis to such problems. Since the limit state equation for eigenvalue or buckling reliability analysis is expressed implicitly in terms of the random variables involved, the probabilistic finite element method is combined with conventional reliability methods such as MVFOSM and AFOSM to determine the probability of failure due to buckling. The accuracy of the results obtained by this method is compared with results from Monte Carlo simulations. Importance sampling is specially chosen to overcome the large number of simulations needed for appropriately accurate results. The case study shows that the method developed here performs well in calculating the probability of buckling failure and could be used for checking the safety of frame structures that might collapse by either yielding or buckling.
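The Monte Carlo side of such a buckling reliability calculation can be illustrated with a pinned-pinned Euler column, whose limit state is g = π²EI/L² − P. All distributions and parameter values below are illustrative assumptions, not the paper's frame model:

```python
import numpy as np

def buckling_reliability(n=100000, seed=0):
    """Crude Monte Carlo failure probability and an MVFOSM-style
    reliability index for the Euler column limit state g = Pcr - P."""
    rng = np.random.default_rng(seed)
    L = 2.0                                  # column length [m] (assumed)
    E = rng.normal(200e9, 10e9, n)           # Young's modulus [Pa] (assumed)
    I = rng.normal(8e-6, 4e-7, n)            # second moment of area [m^4] (assumed)
    P = rng.normal(3.2e6, 3e5, n)            # axial load [N] (assumed)
    g = np.pi**2 * E * I / L**2 - P          # limit state: failure if g < 0
    pf_mc = np.mean(g < 0)                   # Monte Carlo failure probability
    beta = g.mean() / g.std()                # MVFOSM index from first two moments of g
    return pf_mc, beta
```

For small failure probabilities the plain Monte Carlo estimate needs very many samples, which is exactly the motivation for the importance sampling mentioned in the abstract.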


Incremental Eigenspace Model Applied To Kernel Principal Component Analysis

  • Kim, Byung-Joo
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 14, No. 2
    • /
    • pp.345-354
    • /
    • 2003
  • An incremental kernel principal component analysis (IKPCA) is proposed for nonlinear feature extraction from data. One problem of batch kernel principal component analysis (KPCA) is that the computation becomes prohibitive when the data set is large. Another problem is that, in order to update the eigenvectors with new data, all eigenvectors must be recomputed. IKPCA overcomes these problems by incrementally updating the eigenspace model. IKPCA requires less memory than batch KPCA and can easily be improved by re-learning the data. Our experiments show that IKPCA is comparable in performance to batch KPCA for classification problems on nonlinear data sets.
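Setting the kernel machinery aside, the core of an incremental eigenspace model is folding one new sample into an existing eigendecomposition without revisiting old data. A minimal linear-space sketch of such a Hall-style update, assuming the new sample has a component outside the current subspace:

```python
import numpy as np

def incremental_eigen_update(mu, U, S, n, x):
    """Fold one new sample x into the mean mu, eigenvectors U (d x k)
    and eigenvalues S of a sample covariance, without revisiting old
    data.  Uses C' = n/(n+1) C + n/(n+1)^2 d d^T with d = x - mu,
    expressed in the (k+1)-dimensional basis [U, residual direction]."""
    d = x - mu
    a = U.T @ d                      # coordinates inside the current subspace
    r = d - U @ a                    # residual outside the subspace
    rnorm = np.linalg.norm(r)
    rhat = r / rnorm                 # assumes a nonzero residual
    k = len(S)
    D = np.zeros((k + 1, k + 1))
    D[:k, :k] = np.diag(S) * (n / (n + 1))
    v = np.concatenate([a, [rnorm]]) * (np.sqrt(n) / (n + 1))
    D += np.outer(v, v)              # rank-one covariance correction
    w, R = np.linalg.eigh(D)         # small (k+1)x(k+1) eigenproblem
    order = np.argsort(w)[::-1]
    U_new = np.hstack([U, rhat[:, None]]) @ R[:, order]
    return (n * mu + x) / (n + 1), U_new, w[order], n + 1
```

The kernel variant applies the same bookkeeping in feature space; truncating the smallest eigenvalue after each step keeps the model size bounded.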


Latent Mean Analysis of Health Behavior between Adolescents with a Health Problem and Those without: Using the 2009 Korean Youth Health Behavior Survey

  • Park, Jeong-Mo;Kim, Mi-Won;Cho, Yoon Hee
    • 지역사회간호학회지
    • /
    • Vol. 24, No. 4
    • /
    • pp.488-497
    • /
    • 2013
  • Purpose: The purpose of this study was to examine the construct equivalence of the five general factors of health behavior and to compare latent means between Korean adolescents with a health problem and those without. Methods: The 2009 KYRBS (Korean Youth Risk Behavior Survey) data were used for the analysis. Multi-group confirmatory factor analysis was performed to test whether the scale had configural, metric, and scalar invariance across the presence of health problems in adolescents. Results: Configural, metric, and scalar invariance were satisfied, permitting latent mean analysis (LMA) between adolescents with a health problem and those without. The two groups did not differ in the latent means of any factor. Conclusion: Health providers should pay more attention to adolescents with health problems and support them in managing their school life carefully.

An Exploration on the Use of Data Envelopment Analysis for Product Line Selection

  • Lin, Chun-Yu;Okudan, Gul E.
    • Industrial Engineering and Management Systems
    • /
    • Vol. 8, No. 1
    • /
    • pp.47-53
    • /
    • 2009
  • We define the product line (or mix) selection problem as selecting a subset of potential product variants that simultaneously minimizes product proliferation and maintains market coverage. Selecting the most efficient product mix is a complex problem that requires analysis of multiple criteria. This paper proposes a method based on Data Envelopment Analysis (DEA) for product line selection. DEA is a linear programming based technique commonly used for measuring the relative performance of a group of decision-making units with multiple inputs and outputs. Although DEA has proved to be an effective evaluation tool in many fields, it has not previously been applied to the product line selection problem. In this study, we construct a five-step method that systematically adopts DEA to solve a product line selection problem. We then apply the proposed method to an existing line of staplers, providing quantitative evidence for managers to make decisions that maximize company profits while fulfilling market demand.
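In the special case of one input and one output per product variant, the DEA (CCR) linear program collapses to a ratio against the best-performing unit, which makes for a compact illustration. The cutoff-based selection rule below is an invented example, not the paper's five-step method:

```python
import numpy as np

def dea_ccr_single(inputs, outputs):
    """CCR efficiency scores for the single-input, single-output case:
    each unit's output/input ratio divided by the best ratio."""
    ratio = np.asarray(outputs, float) / np.asarray(inputs, float)
    return ratio / ratio.max()

def select_product_line(inputs, outputs, cutoff=0.8):
    """Keep only product variants whose DEA efficiency clears a cutoff
    (the cutoff value is an illustrative assumption)."""
    scores = dea_ccr_single(inputs, outputs)
    return scores, np.flatnonzero(scores >= cutoff)
```

With multiple inputs and outputs the same idea requires solving one linear program per decision-making unit, with weights chosen to show each unit in its best light.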