• Title/Summary/Keyword: Optimal Algorithm

A Study of Guide System for Cerebrovascular Intervention (뇌혈관 중재시술 지원 가이드 시스템에 관한 연구)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Yoon, Kwon-Ha;Joo, Su-Chong
    • Journal of Internet Computing and Services / v.17 no.1 / pp.101-107 / 2016
  • Recent advances in digital imaging technology have made interventional equipment widely available. A vascular interventional procedure inserts a tiny catheter and guide wire into the body, so high-quality X-ray images are required to enhance the effectiveness and safety of the treatment. The resulting increase in radiation dose, however, has become a problem, and studies to improve the performance of X-ray detectors are actively under way. Such interventions also rely on angiographic reference images and 3D medical image processing. In this paper, we propose a guidance system to support cerebrovascular intervention. It addresses the limitations of existing 2D medical images of vessels affected by cerebrovascular disease and guides the catheter and guide wire in real time along an optimal route to the target lesion. The system consists of a medical image acquisition unit, an image processing unit, and a display device. In the experimental environment, the guide services provided by the proposed system were tested on X-ray images of a brain phantom (complete intracranial model with aneurysms, ref. H+N-S-A-010). A Laplacian-based algorithm was applied to generate a reference image from the cerebral blood vessel model, which was rendered from DICOM data by a volume ray casting technique. The A* algorithm was used to provide the tracking path for the catheter and guide wire. The results show the locations of the catheter and guide wire in the proposed system, which is expected to provide a useful guide for future intervention services.
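
To make the path-search step concrete, here is a minimal, hypothetical sketch of A* over a 2D occupancy grid standing in for a segmented vessel image; the grid, start and goal cells, unit step costs, and Manhattan heuristic are all invented for illustration and are not the paper's implementation.

```python
# Hypothetical sketch: A* over a 2D occupancy grid (0 = passable lumen, 1 = blocked).
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: admissible on a 4-connected unit-cost grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                       # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None                               # no route to the lesion

grid = [[0, 0, 1, 0],
        [1, 0, 1, 0],
        [1, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
```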

The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok;Kim, Seong-Hwan;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.1 / pp.17-24 / 2011
  • Purpose: In nuclear medicine, fast iterative reconstruction algorithms such as OSEM are now widely used as alternatives to filtered back projection, thanks to the rapid development of digital computing. However, there is no clear rule relating the iteration and subset numbers to an optimal parameter choice. In this study, we analyzed how image quality changes with the number of iterations and subsets in 3D OSEM reconstruction with 3D beam modeling, using a Jaszczak phantom experiment and brain SPECT patient data. Materials and Methods: Data from five patients who underwent brain SPECT between August and September 2010 in the nuclear medicine department of ASAN Medical Center were analyzed. The phantom images were acquired on a Siemens Symbia T2 dual-head gamma camera using a Jaszczak phantom uniformly filled with water and 99mTc (500 MBq). For both patient and phantom data, each image was reconstructed with the iteration number set to 1, 4, 8, 12, 24, and 30 and the subset number set to 2, 4, 8, 16, and 32. For each reconstructed image, the coefficient of variation (as an estimate of image noise), image contrast, and FWHM were computed and compared. Results: In both the patient and phantom data, image contrast and spatial resolution tended to increase roughly linearly with the iteration and subset numbers, but the coefficient of variation did not improve with either parameter. In the comparison by scan time (10, 20, and 30 seconds per projection), image contrast and FWHM likewise improved roughly linearly with increasing iterations and subsets, while the coefficient of variation again showed no improvement. Conclusion: This experiment confirmed that, as in the existing 1D and 2D OSEM reconstruction methods, image contrast in 3D OSEM reconstruction with 3D beam modeling improves linearly with the iteration and subset numbers. However, these results come from a simple phantom experiment and a limited patient sample, and other variables may be at play; generalizing from them would be premature, and 3D OSEM reconstruction should be evaluated further in future experiments.
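
For reference, the sketch below implements the standard OSEM multiplicative update on a random toy system, making explicit how the iteration count and subset count together determine the number of image updates; the matrix, data, and sizes are invented and carry none of the scanner modeling used in the paper.

```python
# Hypothetical toy OSEM: total updates per reconstruction = n_iter * n_subsets.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((32, 16))          # toy system matrix: 32 projection bins, 16 pixels
x_true = rng.random(16)
y = A @ x_true                    # noiseless projections for the toy example

def osem(A, y, n_iter, n_subsets):
    x = np.ones(A.shape[1])       # flat initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:         # one multiplicative update per subset
            As = A[s]
            ratio = y[s] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

for n_iter, n_sub in [(1, 2), (4, 8), (12, 16)]:
    x = osem(A, y, n_iter, n_sub)
    print(n_iter, n_sub, "error:", np.linalg.norm(x - x_true))
```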

The effects of physical factors in SPECT (물리적 요소가 SPECT 영상에 미치는 영향)

  • 손혜경;김희중;나상균;이희경
    • Progress in Medical Physics / v.7 no.1 / pp.65-77 / 1996
  • Using 2-D and 3-D Hoffman brain phantoms, a 3-D Jaszczak phantom, and a single photon emission computed tomography (SPECT) system, the effects of data acquisition parameters, attenuation, noise, scatter, and the reconstruction algorithm on image quantitation and image quality were studied. For the data acquisition parameters, images were acquired while varying the rotation increment angle and the radius of rotation. A smaller increment angle yielded superior image quality, and a smaller radius from the center of rotation gave better image quality, since resolution degrades as the distance from detector to object increases. Using flood data from the Jaszczak phantom, an optimal attenuation coefficient of 0.12 cm⁻¹ was derived for all collimators, and all images were then corrected for attenuation using this coefficient. The Jaszczak flood data showed a concave line profile without attenuation correction and a flat line profile with it, and attenuation correction improved both image quality and quantitation. To study the effects of noise, images were acquired for 1, 2, 5, 10, and 20 minutes. The 20-minute image showed much better noise characteristics than the 1-minute image, indicating that increasing the counting time reduces the noise, which follows a Poisson distribution. Images were also acquired using dual energy windows, one for the main photopeak and one for the scatter peak, and compared with and without scatter correction. Scatter correction improved image quality so that the cold spheres and bar patterns in the Jaszczak phantom were clearly visualized; applied to the 3-D Hoffman brain phantom, it likewise produced better image quality. In conclusion, SPECT images are significantly affected by the data acquisition parameters, attenuation, noise, scatter, and the reconstruction algorithm, and these factors must be optimized or corrected to obtain useful SPECT data in clinical applications.
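
One standard way to apply a single uniform coefficient such as the 0.12 cm⁻¹ derived above is first-order Chang correction; the sketch below computes per-pixel correction factors for an idealized circular phantom. The radius, grid, and angular sampling are assumptions for illustration, not the paper's processing chain.

```python
# Hypothetical first-order Chang attenuation correction on a circular phantom:
# per-pixel factor = 1 / mean over angles of exp(-mu * path length to the edge).
import numpy as np

mu, R, n_ang = 0.12, 10.0, 64            # cm^-1, assumed radius (cm), ray count
xs = np.linspace(-R, R, 65)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= R**2
angles = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)

acf = np.zeros_like(X)
for th in angles:
    ux, uy = np.cos(th), np.sin(th)
    pu = X * ux + Y * uy                 # pixel position projected on the ray
    # distance from the pixel to the circle boundary along direction (ux, uy)
    d = -pu + np.sqrt(np.maximum(pu**2 + R**2 - (X**2 + Y**2), 0.0))
    acf += np.exp(-mu * d)               # attenuation survival along this ray
acf /= n_ang

corrected = np.where(inside, 1.0 / np.maximum(acf, 1e-12), 0.0)
print("centre correction factor:", corrected[32, 32])  # = exp(mu*R), about 3.3
```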

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has evolved along with information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damages occur more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First of all, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management, and suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul' (loan) and 'sachae' (private loan) as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers to decide whether each item was related to cybercriminality, particularly financial fraud. We then selected keywords from the vocabulary related to nominals and symbols. With these keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing into learning data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, complex sentences are decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are kept: nouns, referring to things, express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols tend to be used. Each selected item was then labeled 'legal' or 'illegal', since turning the selected data into learning data requires deciding whether each item is legitimate. The processed data were converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set and a test set; we used 70% of the data for learning and 30% for testing. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical applications. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and Collective Intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, apparently superior to Term Frequency, MLE, and the others. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for managing crises caused by abnormal content in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
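
The discrimination step maps onto standard tooling; the following is a minimal, hypothetical sketch using TF-IDF features, a 70/30 split, and an RBF-kernel SVM with the reported gamma = 0.5 and cost C = 10. The toy documents and labels are invented, and the paper's pipeline additionally includes Korean morphological analysis and keyword selection, which this omits.

```python
# Hypothetical sketch of the SVM discrimination step on invented toy documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

texts = [
    "private loan fast approval no documents $$",   # illegal-ad-like (toy)
    "city council posts meeting agenda for May",    # legitimate (toy)
    "loan loan contact now cheapest rate!!",
    "weekend weather forecast sunny and mild",
    "instant private loan wire today no credit check",
    "library extends opening hours next month",
]
labels = [1, 0, 1, 0, 1, 0]                         # 1 = illegal, 0 = legal

X = TfidfVectorizer().fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=42, stratify=labels)  # 70/30 split

clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)  # reported parameters
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```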

Comparison between REML and Bayesian via Gibbs Sampling Algorithm with a Mixed Animal Model to Estimate Genetic Parameters for Carcass Traits in Hanwoo(Korean Native Cattle) (한우의 도체형질 유전모수 추정을 위한 REML과 Bayesian via Gibbs Sampling 방법의 비교 연구)

  • Roh, S.H.;Kim, B.W.;Kim, H.S.;Min, H.S.;Yoon, H.B.;Lee, D.H.;Jeon, J.T.;Lee, J.G.
    • Journal of Animal Science and Technology / v.46 no.5 / pp.719-728 / 2004
  • The aims of this study were to estimate genetic parameters for carcass traits of Hanwoo (Korean Native Cattle) and to compare two statistical algorithms for estimating them. Data from 1,526 steers collected at the Hanwoo Improvement Center and the Hanwoo Improvement Complex Area from 1996 to 2001 were used for the analyses. The carcass traits considered were carcass weight, dressing percent, eye muscle area, backfat thickness, and marbling score. Genetic parameters estimated with the EM-REML algorithm were compared with those from Bayesian inference via Gibbs sampling to examine their statistical properties. The heritabilities of the carcass traits estimated by REML were 0.28, 0.25, 0.35, 0.39, and 0.51, respectively, and those by Gibbs sampling were 0.29, 0.25, 0.40, 0.42, and 0.54. These estimates were not significantly different, even though the heritabilities estimated by Gibbs sampling were higher than those from REML. Since the statistics estimated by the two methods did not differ significantly in this study, both methods appear applicable to the analysis of carcass traits in cattle. However, further studies are needed to identify an optimal statistical method for handling large-scale performance data.
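
To illustrate the Bayesian alternative, below is a deliberately simplified, hypothetical Gibbs sampler for heritability in a toy model y_i = mu + a_i + e_i with an identity relationship matrix and flat/Jeffreys-style priors; a real animal model would use the pedigree-based numerator relationship matrix and fixed effects, so this only sketches the sampling mechanics.

```python
# Hypothetical toy Gibbs sampler for h^2 = sigma_a^2 / (sigma_a^2 + sigma_e^2).
import numpy as np

rng = np.random.default_rng(1)
n, true_h2 = 500, 0.4
sa2, se2 = true_h2, 1 - true_h2
y = 3.0 + rng.normal(0, np.sqrt(sa2), n) + rng.normal(0, np.sqrt(se2), n)

mu, a, s_a2, s_e2 = y.mean(), np.zeros(n), 1.0, 1.0
h2_draws = []
for it in range(3000):
    # animal effects: conjugate normal update (one record per animal)
    shrink = s_a2 / (s_a2 + s_e2)
    var = s_a2 * s_e2 / (s_a2 + s_e2)
    a = rng.normal((y - mu) * shrink, np.sqrt(var))
    # overall mean given the animal effects
    mu = rng.normal((y - a).mean(), np.sqrt(s_e2 / n))
    # variance components from scaled inverse-chi-square full conditionals
    s_a2 = (a @ a) / rng.chisquare(n)
    e = y - mu - a
    s_e2 = (e @ e) / rng.chisquare(n)
    if it >= 1000:                          # discard burn-in
        h2_draws.append(s_a2 / (s_a2 + s_e2))

print("posterior mean h2:", np.mean(h2_draws))
```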

A User Optimal Traffic Assignment Model Reflecting Route Perceived Cost (경로인지비용을 반영한 사용자최적통행배정모형)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation / v.23 no.2 / pp.117-130 / 2005
  • In both deterministic and stochastic user optimal traffic assignment models (UOTAM), travel time, the major criterion for loading traffic onto a transportation network, is defined as the sum of link travel times and turn delays at intersections. Under this assignment method, drivers' actual route perception processes and choice behaviors, which can be major explanatory factors, are not sufficiently considered, which may result in biased traffic loading. Although stochastic UOTAM has attempted to reflect drivers' route perception cost by assuming a cumulative distribution function of link travel time, these attempts have not been fundamental solutions: Probit models rest on the questionable assumption of a truncated travel time distribution, and Logit models assume independence of inter-link congestion. The critical reason deterministic UOTAM has not been able to reflect route perception cost is that the perception cost takes a different value for each origin, each destination, and each path connecting them. Finding the optimal route between an OD pair therefore becomes a route enumeration problem in which all routes connecting the pair must be compared, and this causes computational failure because the number of paths grows beyond enumeration as the network becomes larger. The purpose of this study is to propose a method that enables UOTAM to reflect route perception cost without route enumeration between O-D pairs. To this end, this study treats a link as the smallest unit of a path. Since each link can then be handled as a path, in the two-link scanning step of a link-label-based optimum path algorithm, route enumeration between an OD pair reduces to finding optimum paths over all links, with a computational burden no greater than that of the link-label-based optimum path algorithm itself. Each distinct perception cost is embedded as a quantitative value obtained by comparing the sub-path from the origin to the currently scanned link with the searched link.
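
The link-labeling idea fits in a few lines: the hypothetical sketch below runs a Dijkstra-style search whose labels live on links rather than nodes, so a per-link perceived cost can be added without enumerating OD routes. The network, the cost values, and the omission of explicit turn delays are simplifications invented for the example.

```python
# Hypothetical link-label shortest path with an added perceived-cost term.
import heapq

# link id -> (from_node, to_node, travel_time, perceived_cost); toy network
links = {
    "a": (0, 1, 4.0, 0.5), "b": (0, 2, 2.0, 1.5),
    "c": (1, 3, 3.0, 0.2), "d": (2, 3, 5.0, 0.1),
}
out_links = {}
for lid, (u, v, t, p) in links.items():
    out_links.setdefault(u, []).append(lid)

def link_label_path(origin, dest):
    pq, best, pred = [], {}, {}
    for lid in out_links.get(origin, []):        # seed labels on origin links
        u, v, t, p = links[lid]
        best[lid], pred[lid] = t + p, None
        heapq.heappush(pq, (t + p, lid))
    while pq:
        cost, lid = heapq.heappop(pq)
        if cost > best.get(lid, float("inf")):   # stale label
            continue
        u, v, t, p = links[lid]
        if v == dest:                            # rebuild the link sequence
            seq = []
            while lid is not None:
                seq.append(lid)
                lid = pred[lid]
            return seq[::-1], cost
        for nxt in out_links.get(v, []):         # two-link scan: lid -> nxt
            _, _, t2, p2 = links[nxt]
            ncost = cost + t2 + p2               # a turn delay could be added here
            if ncost < best.get(nxt, float("inf")):
                best[nxt], pred[nxt] = ncost, lid
                heapq.heappush(pq, (ncost, nxt))
    return None, float("inf")

print(link_label_path(0, 3))                     # (['a', 'c'], 7.7) on this toy net
```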

Development of Estimation Equation for Minimum and Maximum DBH Using National Forest Inventory (국가산림자원조사 자료를 이용한 최저·최고 흉고직경 추정식 개발)

  • Kang, Jin-Taek;Yim, Jong-Su;Lee, Sun-Jeoung;Moon, Ga-Hyun;Ko, Chi-Ung
    • Journal of agriculture & life science / v.53 no.6 / pp.23-33 / 2019
  • Following a change in the management information system that holds the management records and plans for the entire national forest in South Korea, made by an amendment of the relevant regulation (the national forest management planning and methods, Korea Forest Service), average, maximum, and minimum values of DBH must now be recorded, whereas only average values were required before the amendment. There is therefore a need for an estimation algorithm by which the existing DBH values established before the revision can be converted into minimum and maximum values. The purpose of this study is to develop estimation equations that automatically produce the minimum and maximum DBH for 12 main tree species from the data in the national forest management information system. To develop the equations, 6,858 fixed sample plots from the fifth and sixth national forest inventories (2006 to 2015) were used. Two model forms, DBH-age and DBH-height, were examined using the growth variables DBH, tree age, and height. The most suitable models for estimating the minimum and maximum DBH were Dmin = a + bD + cH and Dmax = a + bD + cH, with DBH and height as the variables. Based on these optimal models, estimation equations were devised for the minimum and maximum DBH of the 12 main tree species.
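
Fitting either selected model form is an ordinary least-squares problem; the sketch below fits Dmin = a + bD + cH on invented plot-level values to show the mechanics. The paper's coefficients come from the inventory data, not from this toy.

```python
# Hypothetical OLS fit of Dmin = a + b*D + c*H on invented plot-level data.
import numpy as np

D = np.array([12.0, 18.5, 24.0, 30.2, 36.8])     # mean DBH (cm), invented
H = np.array([9.5, 13.0, 16.2, 19.0, 22.4])      # mean height (m), invented
Dmin = np.array([6.1, 10.2, 14.0, 18.5, 23.0])   # observed minimum DBH, invented

X = np.column_stack([np.ones_like(D), D, H])     # design matrix [1, D, H]
coef, *_ = np.linalg.lstsq(X, Dmin, rcond=None)  # least-squares coefficients
a, b, c = coef
print(f"Dmin = {a:.2f} + {b:.2f}*D + {c:.2f}*H")
```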

Utilizing the Idle Railway Sites: A Proposal for the Location of Solar Power Plants Using Cluster Analysis (철도 유휴부지 활용방안: 군집분석을 활용한 태양광발전 입지 제안)

  • Eunkyung Kang;Seonuk Yang;Jiyoon Kwon;Sung-Byung Yang
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.79-105 / 2023
  • Due to unprecedented extreme weather events driven by global warming and climate change, many parts of the world are suffering severe damage, and economic losses are snowballing. To address these problems, the Paris Agreement was signed in 2016, and an intergovernmental consultative body was formed to keep the rise in the Earth's average temperature below 1.5℃. Korea also declared 'Carbon Neutrality by 2050' to prevent climate catastrophe, as the temperature increase caused by greenhouse gas emissions was found to harm not only the environment and society as a whole but also Korea's export-dependent economy. In addition, as the diversification of transportation types accelerates, changes in the means people choose are also increasing. As the development paradigm of the low-growth era shifts toward urban regeneration, interest in idle railway sites is rising due to reduced route demand, alignment improvements, and the relocation of urban railways. Utilizing already developed but idle railway sites makes it possible to partially achieve the solar power generation goal of 'Renewable Energy 3020' while remaining free of the environmental damage and resident acceptance issues that usually surround siting; yet actual uses and plans for such solar power facilities are still lacking. Therefore, in this study, using big data provided by the Korea National Railway and the Renewable Energy Cloud Platform, we develop an algorithm to discover and analyze suitable idle sites where solar power generation facilities could be installed, and we identify potentially applicable areas considering the conditions users want. By searching for and deriving these idle but relevant sites, we aim to devise a plan that saves substantial facility and expansion costs in the early stages of development. This study uses various cluster analyses to develop an optimal algorithm for deriving solar power plant locations on idle railway sites and, as a result, suggests 202 'actively recommended areas.' These results should help decision-makers make rational decisions that consider the economy and the environment simultaneously.
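
As a rough illustration of the clustering step, the hypothetical sketch below groups candidate sites with k-means on standardized attributes; the feature names (usable area, irradiance, grid distance), the number of clusters, and the scoring of the "recommended" cluster are all assumptions, not the paper's variables.

```python
# Hypothetical k-means screening of candidate idle railway sites (toy data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# columns: usable area (m2), annual irradiance (kWh/m2), distance to grid (km)
sites = np.column_stack([
    rng.uniform(500, 20000, 120),
    rng.uniform(1150, 1450, 120),
    rng.uniform(0.1, 15, 120),
])
X = StandardScaler().fit_transform(sites)        # put features on one scale
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# score clusters: favor large area, high irradiance, short grid distance
centers = km.cluster_centers_
score = centers[:, 0] + centers[:, 1] - centers[:, 2]
best = int(np.argmax(score))
print("recommended cluster:", best, "with", int((km.labels_ == best).sum()), "sites")
```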

Development and assessment of pre-release discharge technology for response to flood on deteriorated reservoirs dealing with abnormal weather events (이상기후대비 노후저수지 홍수 대응을 위한 사전방류 기술개발 및 평가)

  • Moon, Soojin;Jeong, Changsam;Choi, Byounghan;Kim, Seungwook;Jang, Daewon
    • Journal of Korea Water Resources Association / v.56 no.11 / pp.775-784 / 2023
  • With the increasing trend of extreme rainfall exceeding the design frequency of man-made structures under extreme weather, it is necessary to review the safety of agricultural reservoirs designed in the past. However, apart from reservoirs over a certain size under the jurisdiction of the Korea Rural Affairs Corporation, none of the 13,685 reservoirs managed by local governments can be discharged in an emergency. In such cases it is important to deploy a mobile siphon to the site quickly for pre-release, and this study evaluated the applicability of a mobile siphon (200 mm diameter, minimum head difference 6 m, about 420 m³/h or 10,000 m³/day) that can perform both pre-release and emergency discharge, at the Yugum Reservoir in Gyeongju City. The test bed, Yugum Reservoir, was completed in 1945 and has been in use for about 78 years. According to the hydrological stability analysis, the lowest elevation of the current dam crest is 27.15 EL.m, which is 0.29 m below the reviewed flood level of 27.44 EL.m, indicating a possibility of overtopping of the embankment, and the freeboard falls short by 1.72 m, so the reservoir was judged not to secure hydrological safety. Because water level-flow measurements were not carried out regularly, a clear stage-discharge relationship could not be established, so a water level-volume curve was derived instead. Based on this curve, an operating algorithm for small and medium-sized aging reservoirs was developed that considers the pre-release time and the spillway discharge and predicts the overtopping time for floods of each return period, thereby securing evacuation time in advance and reducing the risk of collapse. Based on one line of 200 mm mobile siphon, the optimal pre-release start time that secures evacuation time (about 1 hour) while maintaining 80% of the upper-limit storage (about 30,000 m³) during a 30-year flood was analyzed to be 12 hours in advance. If this pre-release technology using siphons and the reservoir operation algorithm are implemented ahead of abnormal weather and support managers' decision-making, it will be possible to secure the safety of residents in areas at risk from reservoir collapse, relieve residents' anxiety through a support system for evacuation, and reduce risk factors by providing risk-avoidance measures in the event of a reservoir emergency.
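
As a back-of-envelope companion to the result above, the sketch below only converts a target drawdown volume into a pre-release lead time at the quoted siphon capacity; it omits the flood routing behind the study's 12-hour figure, and the assumed drawdown reading is labeled in the comments.

```python
# Hypothetical mass-balance sketch, not the study's routing model: it converts
# a drawdown volume into the pre-release lead time for 200 mm siphon lines at
# ~420 m3/h each (per the abstract). The study's 12-hour figure comes from
# routing a 30-year flood, which this deliberately omits.
SIPHON_Q_H = 420.0                      # m3/h per siphon line

def lead_time_hours(drawdown_m3, n_siphons=1):
    """Hours of pre-release needed to remove drawdown_m3 of storage."""
    return drawdown_m3 / (SIPHON_Q_H * n_siphons)

# assumed reading: lower storage by 20% of ~30,000 m3 to reach the 80% target
target = 0.2 * 30_000.0                 # 6,000 m3 of drawdown
print(f"{lead_time_hours(target):.1f} h with one siphon")      # ~14.3 h
print(f"{lead_time_hours(target, 2):.1f} h with two siphons")  # ~7.1 h
# same order of magnitude as the 12-h lead time the study reports
```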

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the maintainer's knowledge and experience determine how quickly and how well the problem is solved, and the resulting availability of the vehicle varies accordingly. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose and act quickly by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass on completely to successors, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract the meaning of text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for newly occurring failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary covering the essential terminology and failure codes of the railway rolling stock sector was built separately alongside the case base. Given a new failure, similar past cases are retrieved from the deployed case base, and the three most similar failure cases are extracted so that their actual actions can be proposed as a diagnostic guide. To overcome the limitations of keyword-matching case retrieval in earlier case-based-reasoning expert system studies on rolling stock failures, this study applied several dimensionality reduction techniques that account for the semantic relationships among failure descriptions and verified their usefulness through experiments. Three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, were applied to extract the characteristics of each failure, and similar cases were retrieved by the cosine distance between the resulting vectors. Precision, recall, and F-measure were used to assess the quality of the proposed actions. To compare performance, analysis of variance confirmed that the differences among five algorithms were statistically significant: the three dimensionality reduction methods, an algorithm that randomly retrieves failure cases with identical failure codes, and an algorithm that applies cosine similarity directly to the word vectors. In addition, techniques suitable for practical application were derived by examining how performance varies with the number of dimensions used for reduction. The analysis showed that direct word-based cosine similarity outperformed NMF and LSA, and that the Doc2Vec-based algorithm performed best.
Furthermore, for the dimensionality reduction techniques, performance improved as the number of dimensions grew, up to an appropriate level. Through this study, we confirmed the usefulness of methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but such studies are still scarce in environments like ours, with numerous specialized terms and limited access to data. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of failures, complementing keyword-based case search. We expect this to serve as a basic study for developing diagnostic systems that can be used immediately in the field.
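
One of the compared retrieval variants (LSA plus cosine similarity) can be sketched as follows; the toy case texts, the query, and the dimension count are invented, and the paper's pipeline additionally uses a domain dictionary as well as NMF and Doc2Vec variants, which this omits.

```python
# Hypothetical top-3 case retrieval: TF-IDF -> truncated SVD (LSA) -> cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

cases = [  # invented failure descriptions standing in for the case base
    "traction motor overheat alarm during acceleration",
    "pantograph contact loss at high speed",
    "brake cylinder pressure drop in trailer car",
    "motor temperature sensor fault with overheat warning",
    "door close failure at platform",
]
query = "overheat warning from traction motor"

vec = TfidfVectorizer()
X = vec.fit_transform(cases + [query])           # embed cases and query together
Z = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)  # LSA space

sims = cosine_similarity(Z[-1:], Z[:-1])[0]      # query vs. every past case
for i in sims.argsort()[::-1][:3]:               # three most similar cases
    print(f"{sims[i]:.2f}  {cases[i]}")
```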