• Title/Summary/Keyword: 순차적 해석기법 (sequential analysis technique)

88 search results

Relationship Between Hopelessness and Suicidal Ideation Among Psychiatric Patients: The Mediating Effect of Sleep Quality and Interpretation Bias for Ambiguity (정신건강의학과 환자의 절망감과 자살사고의 관계: 수면의 질과 모호함에 대한 해석 편향의 매개효과)

  • Somi Yun;Eunkyung Kim;Daeho Kim;Yongchon Park
    • Korean Journal of Psychosomatic Medicine / v.31 no.2 / pp.100-107 / 2023
  • Objectives: This study aimed to examine the mediating effect of sleep quality and interpretation bias for ambiguity in the relationship between hopelessness and suicidal ideation in psychiatric patients. Methods: A total of 231 psychiatric outpatients and inpatients completed the Beck Hopelessness Scale, Pittsburgh Sleep Quality Index, Ambiguous/Unambiguous Situations Diary-Extended Version, and Ultra-Short Suicidal Ideation Scale. Data analysis was conducted using regression analyses and bootstrap sampling. Results: Hopelessness had a direct effect on suicidal ideation, and sleep quality and interpretation bias for ambiguity each mediated the association between hopelessness and suicidal ideation. Moreover, there was a significant double mediating effect of sleep quality and interpretation bias for ambiguity on this relationship. Conclusions: These results suggest that considering both sleep quality and interpretation bias for ambiguity may be important in preventing hopelessness from leading to suicidal ideation.
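
The bootstrap mediation analysis described above is typically carried out by repeatedly resampling the data and recomputing the product of the regression path coefficients along the indirect path. The sketch below illustrates that general idea only; the variable names, the synthetic data and the ordinary-least-squares paths are assumptions for illustration, not the authors' actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (n = 231 as in the paper); variable names are illustrative only.
n = 231
hopelessness = rng.normal(size=n)
sleep_quality = 0.5 * hopelessness + rng.normal(size=n)
interp_bias = 0.4 * sleep_quality + 0.2 * hopelessness + rng.normal(size=n)
suicidal_ideation = (0.3 * interp_bias + 0.2 * sleep_quality
                     + 0.25 * hopelessness + rng.normal(size=n))

def ols_slopes(y, *xs):
    """OLS slopes of y on xs (intercept included); returned in the order of xs."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

def serial_indirect(h, sq, ib, si):
    """Serial indirect effect h -> sq -> ib -> si (product of the three path coefficients)."""
    a1 = ols_slopes(sq, h)[0]            # hopelessness -> sleep quality
    d21 = ols_slopes(ib, sq, h)[0]       # sleep quality -> interpretation bias (controlling h)
    b2 = ols_slopes(si, ib, sq, h)[0]    # interpretation bias -> suicidal ideation (controlling sq, h)
    return a1 * d21 * b2

# Percentile bootstrap of the serial (double) mediation effect.
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = serial_indirect(hopelessness[idx], sleep_quality[idx],
                              interp_bias[idx], suicidal_ideation[idx])

point = serial_indirect(hopelessness, sleep_quality, interp_bias, suicidal_ideation)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect: {point:.3f}, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```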

Three-Dimensional Digital Subtraction Angiography (디지털 혈관 조영술 영상의 3차원적 해석)

  • 이승지;김희찬
    • Journal of the Korean Institute of Telematics and Electronics / v.20 no.1 / pp.63-71 / 1983
  • A dye-edge tracking algorithm was used to determine the corresponding points in the two images (anterior-posterior and lateral) of digital subtraction biplane angiography. This correspondence was used to reconstruct three-dimensional images of the cerebral artery in a dog experiment. The method was tested by comparing the measured oblique-view image with the computed reconstructed image. For the present study, we developed three new algorithms. The first determines the corresponding dye-edge points using the fact that the dye density at the moving edge shows the same changing pattern in the two projection views; this moving pattern of dye-edge density is computed by cross-correlation matching of the dye density in two sequential frames. The second algorithm performs a simplified perspective transformation, and the third identifies specific corresponding points on the small vessels. The present method can be applied to compute blood velocity from the dye-edge displacement and the three-dimensional distance data.

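The abstract above describes matching the dye-edge density pattern between two sequential frames by cross-correlation. As a rough illustration of that single step (not the 1983 algorithm itself), the sketch below finds the shift that maximizes the normalized cross-correlation between two one-dimensional dye-density profiles; the sigmoid "dye front" profiles and all parameter values are invented for the example.

```python
import numpy as np

def best_shift(profile_a, profile_b, max_shift):
    """Integer shift of profile_b that maximizes normalized cross-correlation with profile_a."""
    a = (profile_a - profile_a.mean()) / profile_a.std()
    best_s, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        b = np.roll(profile_b, s)            # circular shift; adequate for a front away from the ends
        b = (b - b.mean()) / b.std()
        score = float(np.mean(a * b))        # normalized cross-correlation coefficient
        if score > best_score:
            best_s, best_score = s, score
    return best_s, best_score

# Invented dye-density profiles along a vessel: a sigmoid dye front that has
# advanced by 7 samples between two sequential frames.
x = np.arange(200)
frame1 = 1.0 / (1.0 + np.exp(-(x - 80) / 5.0))
frame2 = 1.0 / (1.0 + np.exp(-(x - 87) / 5.0))

shift, score = best_shift(frame1, frame2, max_shift=20)
print(f"estimated dye-edge displacement: {abs(shift)} samples (correlation {score:.3f})")
```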

A Study on Hull-Form Design for Ships Operated at Two Speeds (두 가지 속도에서 운항하는 선박의 형상설계에 관한 연구)

  • Kim, Tae Hoon;Choi, Hee Jong
    • Journal of the Korean Society of Marine Environment & Safety / v.24 no.4 / pp.467-474 / 2018
  • This study concerns automatic hull-form design for ships operating at two speeds. Research was conducted using a Series 60 ($C_B=0.6$) ship, which has the most basic hull-form, as the target. Hull-form development was pursued from the viewpoint of improving resistance performance; in particular, automatic hull-form design was performed to reduce wave resistance, which is closely related to the hull-form. For this purpose, we developed automatic hull-form design software by combining an optimization technique, a resistance prediction technique and a hull-form modification technique, and applied the software to the target ship. A sequential quadratic programming method was used for optimization, and a potential-based panel method was used to predict resistance performance. A Gaussian-type modification function was developed and applied to change the hull-form. The software was used to design the target ship operating at two different speeds, and the performance of the resulting optimized hull was compared with that of the original hull. To verify the validity of the program, experimental results obtained in model tests were compared with the values calculated by numerical analysis.
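
A sequential quadratic programming optimizer combined with Gaussian-type modification functions, as in the abstract above, can be prototyped with SciPy's SLSQP routine. The sketch below is only a toy analogue: the "hull" is a one-dimensional sectional-area curve, the objective is a curvature penalty weighted over two assumed speeds rather than a potential-based panel method, and the displacement constraint, bump centres and bounds are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the hull description: a sectional-area curve over stations x in [0, 1].
x = np.linspace(0.0, 1.0, 101)
base_curve = np.sin(np.pi * x)                      # placeholder for the Series 60 parent form

# Gaussian-type modification functions; their amplitudes are the design variables.
centres = np.array([0.2, 0.4, 0.6, 0.8])
sigma = 0.08

def modified(amplitudes):
    bumps = sum(a * np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
                for a, c in zip(amplitudes, centres))
    return base_curve + bumps

def toy_resistance(curve, speed):
    # Placeholder objective (curvature penalty scaled by speed squared),
    # standing in for the potential-based panel method used in the paper.
    return speed ** 2 * float(np.sum(np.diff(curve, 2) ** 2))

speeds, weights = (0.7, 1.0), (0.5, 0.5)            # assumed operating speeds and weights

def objective(amplitudes):
    c = modified(amplitudes)
    return sum(w * toy_resistance(c, v) for w, v in zip(weights, speeds))

# Keep the enclosed "displacement" (area under the curve) equal to the parent hull's.
target_area = np.trapz(base_curve, x)
constraint = {"type": "eq", "fun": lambda a: np.trapz(modified(a), x) - target_area}

result = minimize(objective, x0=np.zeros(len(centres)), method="SLSQP",
                  bounds=[(-0.1, 0.1)] * len(centres), constraints=[constraint])
print("optimal bump amplitudes:", np.round(result.x, 4))
```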

Parallel Processing Based Decomposition Technique for Efficient Collaborative Optimization (효율적 분산협동최적설계를 위한 병렬처리 기반 분해 기법)

  • Park, Hyeong-Uk;Kim, Seong-Chan;Kim, Min-Su;Choe, Dong-Hun
    • Transactions of the Korean Society of Mechanical Engineers A / v.25 no.5 / pp.883-890 / 2001
  • In practical design studies, most designers solve multidisciplinary problems involving large, complex design systems. These problems can have hundreds of analyses and thousands of variables, and the sequence in which the processes are solved affects the speed of the total design cycle. It is therefore very important for designers to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several multidisciplinary analysis subsystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems that uses a genetic algorithm to raise design efficiency, and it shows the relationship between decomposition and multidisciplinary design optimization (MDO) methodology.
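
The genetic-algorithm-based decomposition described above can be caricatured as assigning each analysis to one of several subsystems so that as few couplings as possible cross subsystem boundaries. The sketch below is a generic GA for that assignment problem on a random, hypothetical coupling matrix; the population size, operators and cost function are illustrative choices, not the strategy proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coupling matrix: dep[i, j] = 1 if analysis i needs output from analysis j.
n_analyses, n_subsystems = 12, 3
dep = (rng.random((n_analyses, n_analyses)) < 0.25).astype(int)
np.fill_diagonal(dep, 0)

def coupling_cost(assign):
    """Number of dependencies crossing subsystem boundaries (to be minimized)."""
    cross = assign[:, None] != assign[None, :]
    return int(np.sum(dep * cross))

def tournament(pop, cost):
    i, j = rng.integers(0, len(pop), 2)
    return pop[i] if cost[i] <= cost[j] else pop[j]

def evolve(pop_size=60, generations=200, p_mut=0.05):
    pop = rng.integers(0, n_subsystems, size=(pop_size, n_analyses))
    for _ in range(generations):
        cost = np.array([coupling_cost(ind) for ind in pop])
        new_pop = [pop[cost.argmin()].copy()]                 # elitism: keep the best assignment
        while len(new_pop) < pop_size:
            p1, p2 = tournament(pop, cost), tournament(pop, cost)
            mask = rng.random(n_analyses) < 0.5                # uniform crossover
            child = np.where(mask, p1, p2)
            mutate = rng.random(n_analyses) < p_mut            # random reassignment mutation
            child[mutate] = rng.integers(0, n_subsystems, int(mutate.sum()))
            new_pop.append(child)
        pop = np.array(new_pop)
    cost = np.array([coupling_cost(ind) for ind in pop])
    return pop[cost.argmin()], int(cost.min())

best_assign, best_cost = evolve()
print("subsystem assignment per analysis:", best_assign)
print("cross-subsystem couplings:", best_cost)
```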

Time-split Mixing Model for Analysis of 2D Advection-Dispersion in Open Channels (개수로에서 2차원 이송-분산 해석을 위한 시간분리 혼합 모형)

  • Jung, Youngjai;Seo, Il Won
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.2 / pp.495-506 / 2013
  • This study developed the Time-split Mixing Model (TMM), which can represent the pollutant mixing process in a three-dimensional open channel by constructing a conceptual model based on Taylor's assumption (1954) that shear-flow dispersion results from the combination of shear advection and turbulent diffusion. The developed model splits the 2-D mixing process into longitudinal mixing and transverse mixing, and it represents 2-D advection-dispersion by repeatedly calculating, in sequence, the concentration separation caused by the vertical non-uniformity of the flow velocity and then the vertical mixing by turbulent diffusion. The simulation results indicated that the proposed model explains the effect of concentration overlapping at boundary walls, and the simulated concentration was in good agreement with the analytical solution of the 2-D advection-dispersion equation in the Taylor period (Chatwin, 1970). The proposed model could explain the correlation between hydraulic factors and the dispersion coefficient, providing physical insight into the dispersion behavior. The longitudinal dispersion coefficient calculated by the TMM varied with the mixing time, unlike the constant value suggested by Elder (1959), whereas the transverse dispersion coefficient was similar to the coefficients evaluated from the experiments of Sayre and Chang (1968) and Fischer et al. (1979).
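
The time-split idea above, advancing the solution by alternating an advection sub-step with a mixing (diffusion) sub-step, can be shown on the plain one-dimensional advection-diffusion equation. The sketch below is a minimal operator-splitting scheme with a first-order upwind advection step and an explicit diffusion step; the velocity, dispersion coefficient and initial pulse are assumed values, and this is not the two-dimensional TMM formulation itself.

```python
import numpy as np

# dC/dt + u dC/dx = D d2C/dx2, advanced by alternating an advection and a diffusion sub-step.
nx, L = 400, 100.0
dx = L / nx
u, D = 0.5, 0.05                                  # assumed velocity and dispersion coefficient
dt = 0.4 * min(dx / u, dx * dx / (2.0 * D))       # respect advection (CFL) and diffusion limits
x = np.linspace(0.0, L, nx)
C = np.exp(-((x - 20.0) ** 2) / 4.0)              # initial concentration pulse

def advect(C):
    """First-order upwind advection sub-step (u > 0)."""
    Cn = C.copy()
    Cn[1:] -= u * dt / dx * (C[1:] - C[:-1])
    return Cn

def diffuse(C):
    """Explicit central-difference diffusion sub-step."""
    Cn = C.copy()
    Cn[1:-1] += D * dt / dx ** 2 * (C[2:] - 2.0 * C[1:-1] + C[:-2])
    return Cn

for _ in range(300):
    C = diffuse(advect(C))                        # sequential split: advection, then mixing

print(f"peak concentration {C.max():.3f} at x = {x[C.argmax()]:.1f}")
```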

Geostatistical Approach to Integrated Modeling of Iron Mine for Evaluation of Ore Body (철광산의 광체 평가를 위한 지구통계학적 복합 모델링)

  • Ahn, Taegyu;Oh, Seokhoon;Kim, Kiyeon;Suh, Baeksoo
    • Geophysics and Geophysical Exploration / v.15 no.4 / pp.177-189 / 2012
  • Three-dimensional ore-body modeling was evaluated by applying a geostatistical integration technique to multiple geophysical (electrical resistivity, MT) and geological (borehole data, physical properties of core) data. The multiple geophysical data sets made it possible to analyze the resistivity range in the boreholes and the surrounding area, and the correlation between resistivity and density obtained from physical-property tests of core was also analyzed. In the case study, the resistivity of the ore body decreased as the density increased, which appears to be related to the fact that the ore body (magnetite) itself contains a heavy conductive component (Fe). Based on the laboratory physical-property tests in the iron-mine region, various geophysical, geological and borehole data were used for ore-body modeling, namely electrical resistivity, MT, physical-property data, borehole data and grade data obtained from the boreholes. Among the various geostatistical techniques for integrated data analysis, this study applied the SGS (sequential Gaussian simulation) method to describe the regionally varying non-homogeneity through realizations that preserve the mean and variance. From the geostatistical simulation results of the geophysical, geological and grade data, the locations of the residual ore body and of the previously reported ore body were confirmed. In addition, another highly probable iron ore body was estimated at greater depth in the study area through the integrated modeling.
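
Sequential Gaussian simulation, named in the abstract above, visits grid nodes along a random path, kriges a conditional mean and variance at each node from the data and the already-simulated nodes, and draws the node value from that conditional distribution. The sketch below is a minimal one-dimensional version with simple kriging and an assumed exponential covariance; the conditioning values and model parameters are illustrative and do not reproduce the multivariate integration workflow of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(h, sill=1.0, corr_len=10.0):
    """Assumed exponential covariance model (not the variogram fitted in the study)."""
    return sill * np.exp(-np.abs(h) / corr_len)

def sgs_1d(x_data, z_data, x_grid, n_neigh=8):
    """Minimal 1-D sequential Gaussian simulation with simple kriging.
    Assumes z_data are already normal-score transformed (zero mean, unit variance)."""
    xs, zs = list(x_data), list(z_data)
    sim = np.full(len(x_grid), np.nan)
    for i in rng.permutation(len(x_grid)):            # random visiting path
        x0 = x_grid[i]
        d = np.abs(np.asarray(xs) - x0)
        use = np.argsort(d)[:n_neigh]                  # nearest conditioning points
        xc, zc = np.asarray(xs)[use], np.asarray(zs)[use]
        K = cov(xc[:, None] - xc[None, :]) + 1e-8 * np.eye(len(use))
        k = cov(xc - x0)
        w = np.linalg.solve(K, k)
        mean = float(w @ zc)                           # simple-kriging mean (zero global mean)
        var = max(float(cov(0.0) - w @ k), 0.0)        # simple-kriging variance
        sim[i] = mean + np.sqrt(var) * rng.normal()    # draw from the conditional distribution
        xs.append(x0)                                  # the simulated node becomes conditioning data
        zs.append(sim[i])
    return sim

# Illustrative conditioning data, e.g. normal-scored grade or resistivity along a borehole.
x_data = np.array([2.0, 15.0, 33.0, 47.0, 60.0])
z_data = np.array([0.8, -0.3, 1.2, -1.0, 0.1])
x_grid = np.linspace(0.0, 60.0, 61)

realization = sgs_1d(x_data, z_data, x_grid)
print(np.round(realization[:10], 2))
```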

Experimental Implementation of Continuous GPS Data Processing Procedure on Near Real-Time Mode for High-Precision of Medium-Range Kinematic Positioning Applications (고정밀 중기선 동적측위 분야 응용을 위한 GPS 관측데이터 준실시간 연속 처리절차의 실험적 구현)

  • Lee, Hungkyu
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.3 / pp.31-40 / 2017
  • This paper deals with high-precision reduction of GPS measurements and its implementation in near real-time, kinematic mode for applications requiring centimeter-level precision of the estimated coordinates, even when the target stations are a few hundred kilometers away from their reference stations. We designed the system architecture and the data streaming and processing scheme. An intensive investigation was performed to determine the characteristics of the GPS medium-range functional model, the IGS infrastructure and some exemplary systems. The designed system consists of a streaming unit and a processing unit: the former automatically collects GPS data through Ntrip and IGS ultra-rapid products by FTP connection, whereas the latter handles the reduction of the GPS observables in static and kinematic mode to a time series of the target stations' 3D coordinates. The data streaming unit was realized with a DOS batch file, a Perl script and BKG's BNC program, whereas the processing unit was implemented by defining a process control file for the BPE. To assess the functionality and the precision of the positional solutions, an experiment was carried out on a network of seven GPS stations with baselines ranging from a few hundred up to a thousand kilometers. The results confirmed that the whole system operated as designed, with a precision better than $\pm$1 cm in each positional component at the 95% confidence level.
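
The streaming unit described above is built from a DOS batch file, a Perl script and BKG's BNC program; the fragment below only sketches, in Python, the comparable product-fetch step of computing the current GPS week and day and pulling an IGS ultra-rapid orbit file over anonymous FTP. The host name, directory layout and legacy-style file-name pattern are placeholders, not a verified archive configuration.

```python
"""Sketch of the product-fetch part of a streaming unit: compute the GPS week and
day-of-week for the current epoch and download an IGS ultra-rapid orbit file over
anonymous FTP. Host, directory and file-name pattern are placeholders, not a
verified IGS archive layout; the paper itself uses a batch file, a Perl script and BNC."""
from datetime import datetime, timezone
from ftplib import FTP

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_week_day(t):
    """GPS week number and day of week (0 = Sunday) for a UTC datetime."""
    delta = t - GPS_EPOCH
    return delta.days // 7, delta.days % 7

def fetch_ultra_rapid(host, base_dir, out_dir="."):
    now = datetime.now(timezone.utc)
    week, day = gps_week_day(now)
    hour = (now.hour // 6) * 6                       # ultra-rapid products are issued every 6 h
    name = f"igu{week:04d}{day}_{hour:02d}.sp3.Z"    # legacy-style name; assumed convention
    with FTP(host) as ftp:
        ftp.login()                                  # anonymous login
        ftp.cwd(f"{base_dir}/{week:04d}")
        with open(f"{out_dir}/{name}", "wb") as fh:
            ftp.retrbinary(f"RETR {name}", fh.write)
    return name

# Example call with a placeholder archive address (replace with an actual IGS data centre):
# fetch_ultra_rapid("ftp.example-igs-archive.org", "/pub/igs/products")
```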

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, studies on predicting the success of customer campaigns have been conducted in academia, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online services, companies carry out many types of campaigns on a scale that cannot be compared with the past. However, customers increasingly perceive these campaigns as spam as fatigue from duplicate exposure grows, and from the corporate standpoint the effectiveness of the campaigns themselves is decreasing, with rising investment costs leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. The campaign system considered here has the ultimate purpose of increasing the success rate of various campaigns by collecting and analyzing customer-related data and using them for campaigns; in particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data contain many kinds of features, selecting appropriate features is very important. If all of the input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection) and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when the number of features is large they can show poor classification performance and require long learning times. Therefore, this study proposes an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in searching for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features with a negative effect are removed, and the sequential method is then applied, which increases search efficiency and allows the improved algorithm to generalize its predictions. It was confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithms; compared with the original data set, the greedy algorithm, a genetic algorithm (GA) and recursive feature elimination (RFE), campaign success prediction was higher. In addition, the improved feature selection algorithm was found to help analyze and interpret the prediction results by providing the importance of the derived features.
    The important features included attributes such as age, customer rating and sales, which were already known to be statistically important. In addition, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average data consumption rate over three months and the wireless data usage of the last three months, were unexpectedly selected as important features for campaign response. It was confirmed that basic attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
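
As context for the improvement proposed above, the baseline SFFS procedure adds the best remaining feature at each step and then conditionally drops an earlier feature when doing so yields a better subset than any previously seen at that size. A minimal sketch of that baseline on synthetic data is given below; the classifier, cross-validation setup and data are stand-ins, and the statistics-guided pre-screening introduced by the paper is not implemented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def cv_score(X, y, feats):
    """Mean 5-fold cross-validated accuracy using only the given feature indices."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, feats], y, cv=5).mean()

def sffs(X, y, k):
    """Baseline sequential floating forward selection (the method the paper builds on)."""
    selected, best_at_size = [], {}
    while len(selected) < k:
        # Forward step: add the single best remaining feature.
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        best_f = max(remaining, key=lambda f: cv_score(X, y, selected + [f]))
        selected = selected + [best_f]
        best_at_size[len(selected)] = max(best_at_size.get(len(selected), -np.inf),
                                          cv_score(X, y, selected))
        # Floating step: drop one earlier feature if the reduced subset beats the best
        # score previously recorded at that smaller size.
        if len(selected) > 2:
            candidates = [f for f in selected if f != best_f]
            worst = max(candidates,
                        key=lambda f: cv_score(X, y, [g for g in selected if g != f]))
            reduced = [g for g in selected if g != worst]
            score = cv_score(X, y, reduced)
            if score > best_at_size.get(len(reduced), -np.inf):
                selected = reduced
                best_at_size[len(reduced)] = score
    return selected

# Synthetic stand-in for campaign-response data (the study uses real customer attributes).
X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
features = sffs(X, y, k=6)
print("selected feature indices:", features)
print("CV accuracy with selected subset:", round(cv_score(X, y, features), 3))
```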