• Title/Summary/Keyword: On-line algorithm


An Application of loop-loop EM Method for Geotechnical Survey (지반조사를 위한 loop-loop 전자탐사 기법의 적용)

  • You Jin-Sang;Song Yoonho;Seol Soon-Jee;Song Young-Soo
    • Geophysics and Geophysical Exploration
    • /
    • v.4 no.2
    • /
    • pp.25-33
    • /
    • 2001
  • A loop-loop electromagnetic (EM) survey in the frequency domain was carried out to provide a basic solution for geotechnical applications. The source-receiver configuration may be horizontal co-planar (HCP) and/or vertical co-planar (VCP). Three quadrature components of the mutual impedance ratio for each configuration are used to construct the subsurface image. To obtain model responses and validate the performance of the inversion, we computed the responses of two-layered and three-layered earth models and of a two-dimensional (2-D) isolated anomalous body. The response of the 2-D isolated anomalous body was calculated using the extended Born approximation to solve the 2.5-D integral equation describing the EM scattering problem. Least-squares inversion with a variable Lagrangian multiplier produced a more resolvable image from HCP data than from VCP data. Furthermore, joint inversion of HCP and VCP data improved the stability and resolution of the inversion. Resistivity values, however, did not exactly match the true ones. Loop-loop EM field data were obtained with the EM34-3XL system manufactured by Geonics Ltd. (Canada). An electrical resistivity survey was conducted in advance on the same line for comparison. Since the image constructed from the loop-loop EM data by the 2-D inversion algorithm showed a resistivity distribution very similar to that from the electrical resistivity survey, we expect that the developed 2.5-D loop-loop EM inversion program can be applied to reconnaissance site surveys.

  • PDF
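The damped least-squares update with a variable Lagrangian (damping) multiplier described in this abstract can be sketched generically. The snippet below is a minimal illustration on a toy linear forward problem, not the authors' 2.5-D EM code; all function and variable names are hypothetical:

```python
import numpy as np

def damped_lsq_inversion(forward, jacobian, d_obs, m0, lam=1.0, n_iter=20):
    """Iterative damped least-squares inversion.

    At each step solve (J^T J + lam*I) dm = J^T (d_obs - f(m)); shrink the
    damping (Lagrangian multiplier) when the fit improves, grow it otherwise.
    """
    m = m0.astype(float).copy()
    misfit = np.linalg.norm(d_obs - forward(m))
    for _ in range(n_iter):
        J = jacobian(m)
        r = d_obs - forward(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
        new_misfit = np.linalg.norm(d_obs - forward(m + dm))
        if new_misfit < misfit:          # accept step, relax damping
            m, misfit, lam = m + dm, new_misfit, lam * 0.5
        else:                            # reject step, strengthen damping
            lam *= 2.0
    return m

# toy linear "layered-earth" problem: d = G m
G = np.array([[1.0, 0.5], [0.3, 2.0], [0.8, 0.8]])
m_true = np.array([2.0, 3.0])
d_obs = G @ m_true
m_est = damped_lsq_inversion(lambda m: G @ m, lambda m: G, d_obs, np.zeros(2))
```

For a linear problem the scheme reduces to damped Gauss-Newton and recovers the true model parameters as the damping relaxes.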

Accuracy of posteroanterior cephalogram landmarks and measurements identification using a cascaded convolutional neural network algorithm: A multicenter study

  • Sung-Hoon Han;Jisup Lim;Jun-Sik Kim;Jin-Hyoung Cho;Mihee Hong;Minji Kim;Su-Jung Kim;Yoon-Ji Kim;Young Ho Kim;Sung-Hoon Lim;Sang Jin Sung;Kyung-Hwa Kang;Seung-Hak Baek;Sung-Kwon Choi;Namkug Kim
    • The Korean Journal of Orthodontics
    • /
    • v.54 no.1
    • /
    • pp.48-58
    • /
    • 2024
  • Objective: To quantify the effects of midline-related landmark identification on midline deviation measurements in posteroanterior (PA) cephalograms using a cascaded convolutional neural network (CNN). Methods: A total of 2,903 PA cephalogram images obtained from 9 university hospitals were divided into training, internal validation, and test sets (n = 2,150, 376, and 377). As the gold standard, 2 orthodontic professors marked the bilateral landmarks, including the frontozygomatic suture point and latero-orbitale (LO), and the midline landmarks, including the crista galli, anterior nasal spine (ANS), upper dental midpoint (UDM), lower dental midpoint (LDM), and menton (Me). For the test, Examiner-1 and Examiner-2 (3-year and 1-year orthodontic residents) and the cascaded-CNN models marked the landmarks. After evaluating the point-to-point errors of landmark identification, the successful detection rate (SDR) and the distance and direction of midline landmark deviation from the midsagittal line (ANS-mid, UDM-mid, LDM-mid, and Me-mid) were measured, and statistical analysis was performed. Results: The cascaded-CNN algorithm showed a clinically acceptable level of point-to-point error (1.26 mm vs. 1.57 mm for Examiner-1 and 1.75 mm for Examiner-2). The average SDR within the 2 mm range was 83.2%, with high accuracy at the LO (right, 96.9%; left, 97.1%) and UDM (96.9%). The absolute measurement errors were less than 1 mm for ANS-mid, UDM-mid, and LDM-mid compared with the gold standard. Conclusions: The cascaded-CNN model may be considered an effective tool for the auto-identification of midline landmarks and quantification of midline deviation in PA cephalograms of adult patients, regardless of variations in the image acquisition method.
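The point-to-point error and successful detection rate (SDR) metrics used in studies like this one are straightforward to compute. The following is a minimal sketch with made-up landmark coordinates, not the study's data:

```python
import numpy as np

def point_to_point_errors(pred, gold):
    """Euclidean distance between predicted and gold landmark positions (mm)."""
    return np.linalg.norm(pred - gold, axis=1)

def successful_detection_rate(pred, gold, threshold_mm=2.0):
    """Fraction of landmarks whose point-to-point error is within threshold."""
    return float(np.mean(point_to_point_errors(pred, gold) <= threshold_mm))

# toy example: 4 landmarks, 3 of which land within 2 mm of the gold standard
gold = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0], [70.0, 80.0]])
pred = gold + np.array([[0.5, 0.0], [1.0, 1.0], [3.0, 0.0], [0.0, 1.5]])
sdr = successful_detection_rate(pred, gold)   # 3 of 4 within 2 mm -> 0.75
```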

Patient Setup Aid with Wireless CCTV System in Radiation Therapy (무선 CCTV 시스템을 이용한 환자 고정 보조기술의 개발)

  • Park, Yang-Kyun;Ha, Sung-Whan;Ye, Sung-Joon;Cho, Woong;Park, Jong-Min;Park, Suk-Won;Huh, Soon-Nyung
    • Radiation Oncology Journal
    • /
    • v.24 no.4
    • /
    • pp.300-308
    • /
    • 2006
  • Purpose: To develop a wireless CCTV system in semi-beam's eye view (BEV) to monitor daily patient setup in radiation therapy. Materials and Methods: To obtain patient images in semi-BEV, CCTV cameras were installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via radio-frequency signal (~2.4 GHz, 10 mW RF output). An expected problem with this system is radio-frequency interference, which was solved by RF shielding with Cu foils and median-filtering software. The images are analyzed by our custom-made software: a user indicates three anatomical landmarks on the patient surface, and the 3-dimensional structures are then automatically obtained and registered through a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization. This algorithm was applied to phantom images to investigate the setup accuracy. A respiratory gating system was also investigated using real-time image processing: a line-laser marker projected on the patient's surface is extracted by binary image processing, and the breathing pattern is calculated and displayed in real time. Results: More than 80% of the camera noise from the linear accelerator was eliminated by wrapping the cameras with copper foils. The accuracy of the localization procedure was on the order of 1.5 ± 0.7 mm with a point phantom and within sub-millimeters and sub-degrees with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring was possible with a delay time of ~0.17 sec. Conclusion: The wireless CCTV camera system is a novel tool for monitoring daily patient setup, and respiratory gating with the wireless CCTV also appears feasible.
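The localization step combining landmark correspondences with Gauss-Newton optimization can be illustrated, in a simplified 2-D form, as fitting a rigid rotation and translation to matched points. This sketch is generic, not the authors' stereo-matching pipeline:

```python
import numpy as np

def gauss_newton_rigid_2d(ref, obs, n_iter=15):
    """Fit rotation theta and translation (tx, ty) so that
    R(theta) @ ref_i + t ~= obs_i, by Gauss-Newton on the residuals."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(n_iter):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        pred = ref @ R.T + np.array([tx, ty])
        r = (obs - pred).ravel()          # residuals [x0, y0, x1, y1, ...]
        # Jacobian of each predicted coordinate w.r.t. (theta, tx, ty)
        J = np.zeros((2 * len(ref), 3))
        dR = np.array([[-s, -c], [c, -s]])
        J[:, 0] = (ref @ dR.T).ravel()
        J[0::2, 1] = 1.0                  # tx affects x components
        J[1::2, 2] = 1.0                  # ty affects y components
        # Gauss-Newton update from the normal equations
        step = np.linalg.solve(J.T @ J, J.T @ r)
        theta, tx, ty = theta + step[0], tx + step[1], ty + step[2]
    return theta, tx, ty

# three landmarks, rotated by 0.3 rad and shifted by (2, -1)
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_theta, true_t = 0.3, np.array([2.0, -1.0])
c, s = np.cos(true_theta), np.sin(true_theta)
obs = ref @ np.array([[c, -s], [s, c]]).T + true_t
theta, tx, ty = gauss_newton_rigid_2d(ref, obs)
```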

GIS based Development of Module and Algorithm for Automatic Catchment Delineation Using Korean Reach File (GIS 기반의 하천망분석도 집수구역 자동 분할을 위한 알고리듬 및 모듈 개발)

  • PARK, Yong-Gil;KIM, Kye-Hyun;YOO, Jae-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.4
    • /
    • pp.126-138
    • /
    • 2017
  • Recently, national interest in the environment has been increasing, and to deal with water environment-related issues swiftly and accurately, the demand for GIS-based analysis of water environment data is growing. To meet these demands, a spatial-network-based stream network analysis map (Korean Reach File; KRF) supporting spatial analysis of water environment data was developed and is being provided. However, delineating catchment areas remains difficult, even though they are the basis for supplying spatial data and related information frequently required by users, for example when establishing remediation measures against water pollution accidents. Therefore, a computer program was developed in this study. The development process included designing a delineation method and developing an algorithm and modules. A DEM (Digital Elevation Model) and FDR (Flow Direction) grid were used as the major data to automatically delineate catchment areas. The delineation algorithm was developed in three stages: catchment area grid extraction, boundary point extraction, and boundary line division. An add-in catchment delineation module based on ArcGIS (ESRI) was also developed in consideration of the productivity and utility of the program. Using the developed program, catchment areas were delineated and compared to the catchment areas currently used by the government. The results showed that catchment areas were delineated efficiently using the digital elevation data. In particular, in regions with clear topographic slopes, they were delineated accurately and swiftly. Although the catchment areas were not segmented accurately in some flat regions, such as paddy fields, urban areas, and areas with well-organized drainage facilities, the program clearly reduces the processing time needed to delineate catchment areas. In the future, the algorithm should be enhanced to exploit higher-precision digital elevation data and to reduce the calculation time for large data volumes.
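The first stage of such an algorithm, catchment area grid extraction from an FDR grid, can be sketched as an upstream walk over D8 flow-direction codes. This toy example assumes the standard ESRI D8 encoding and is not the KRF module itself:

```python
# D8 flow-direction codes: (row, col) offsets toward the downstream neighbour
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def delineate_catchment(fdr, outlet):
    """Return the set of grid cells draining to `outlet`, by walking the
    flow-direction (FDR) grid upstream from the outlet cell."""
    rows, cols = len(fdr), len(fdr[0])
    catchment, frontier = {outlet}, [outlet]
    while frontier:
        r, c = frontier.pop()
        for nr in range(max(r - 1, 0), min(r + 2, rows)):
            for nc in range(max(c - 1, 0), min(c + 2, cols)):
                if (nr, nc) in catchment:
                    continue
                dr, dc = D8[fdr[nr][nc]]
                if (nr + dr, nc + dc) == (r, c):   # neighbour drains into us
                    catchment.add((nr, nc))
                    frontier.append((nr, nc))
    return catchment

# toy 3x3 FDR grid: every cell ultimately drains to the bottom-right corner
fdr = [[2, 4, 4],
       [2, 2, 4],
       [1, 1, 1]]
catchment = delineate_catchment(fdr, (2, 2))
```

Boundary point extraction and boundary line division would then trace the edges of this cell set to produce the catchment polygon.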

Evaluation of Image Quality Based on Time of Flight in PET/CT (PET/CT에서 재구성 프로그램의 성능 평가)

  • Lim, Jung Jin;Yoon, Seok Hwan;Kim, Jong Pil;Nam Koong, Sik;Shin, Seong Hwa;Yoon, Sang Hyeok;Kim, Yeong Seok;Lee, Hyeong Jin;Lee, Hong Jae;Kim, Jin Eui;Woo, Jae Ryong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.2
    • /
    • pp.110-114
    • /
    • 2012
  • Purpose: PET/CT is widely used for early cancer screening and for pre- and post-operative follow-up, and image reconstruction methods have advanced along with scanner hardware. We evaluated the image quality of each manufacturer's reconstruction program based on time of flight (TOF). Materials and Methods: After acquiring phantom images for 2 minutes with the Gemini TF (Philips, USA), Biograph mCT (Siemens, USA), and Discovery 690 (GE, USA), we reconstructed images with and without Astonish TF (Philips), ultraHD PET (Siemens), and SharpIR (GE). The background of a Flangeless Esser PET phantom (Data Spectrum Corp., USA) was filled with 18F-FDG at 1.11 kBq/ml (30 nCi/ml), and the 4 hot inserts (8, 12, 16, 25 mm) were filled with 8.88 kBq/ml (240 nCi/ml), giving a background-to-insert activity ratio of 1:8. A triple-line phantom (Data Spectrum Corp., USA) was filled with 18F-FDG at 37 MBq (1 mCi), and the three lines were filled with 0.37 MBq (10 µCi). Contrast ratio and background variability were obtained from the reconstructed Flangeless Esser phantom images, and resolution was obtained from the reconstructed triple-line phantom images. Results: For the 8, 12, 16, and 25 mm inserts, the contrast ratios without and with Astonish TF were 8.69, 12.28, 19.31, 25.80% and 6.24, 13.24, 19.55, 27.60%; without and with ultraHD PET, 4.94, 12.68, 22.09, 30.14% and 4.76, 13.23, 23.72, 31.65%; without and with SharpIR, 13.18, 17.44, 28.76, 34.67% and 13.15, 18.32, 30.33, 35.73%. The background variabilities without and with Astonish TF were 5.51, 5.42, 7.13, 6.28% and 7.81, 7.94, 6.40, 6.28%; without and with ultraHD PET, 6.46, 6.63, 5.33, 5.21% and 6.08, 6.08, 4.45, 4.58%; without and with SharpIR, 5.93, 4.82, 4.45, 5.09% and 4.80, 3.92, 3.63, 4.50%. For the upper, center, and right line positions, the resolutions without and with Astonish TF were 10.77, 11.54, 9.34 mm and 9.54, 8.90, 8.88 mm; without and with ultraHD PET, 7.84, 6.95, 8.32 mm and 7.51, 6.66, 8.27 mm; without and with SharpIR, 9.35, 8.69, 8.99 mm and 9.88, 9.18, 9.00 mm. Conclusion: Image quality generally improved when a TOF-based reconstruction program was used. Results also differed among the manufacturers' reconstruction programs, which reflects differences in scanner specifications and reconstruction algorithms; further study is therefore needed to find appropriate reconstruction conditions when using these programs to improve image quality.

  • PDF
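Contrast ratio and background variability figures of this kind are commonly computed along the lines of the NEMA image-quality definitions. The sketch below assumes that convention, since the abstract does not give the study's exact formulas:

```python
import numpy as np

def contrast_percent(hot_mean, bkg_mean, activity_ratio=8.0):
    """NEMA-style hot-insert contrast: (H/B - 1) / (ratio - 1) * 100,
    where `activity_ratio` is the insert-to-background activity ratio."""
    return (hot_mean / bkg_mean - 1.0) / (activity_ratio - 1.0) * 100.0

def background_variability_percent(bkg_roi_means):
    """Standard deviation of background ROI means relative to their mean."""
    b = np.asarray(bkg_roi_means, dtype=float)
    return float(b.std(ddof=1) / b.mean() * 100.0)

# toy ROI means from a reconstructed phantom image
c = contrast_percent(hot_mean=3.1, bkg_mean=1.0)           # (3.1-1)/7*100 = 30.0
bv = background_variability_percent([1.0, 1.1, 0.9, 1.0])
```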

The effects of physical factors in SPECT (물리적 요소가 SPECT 영상에 미치는 영향)

  • 손혜경;김희중;나상균;이희경
    • Progress in Medical Physics
    • /
    • v.7 no.1
    • /
    • pp.65-77
    • /
    • 1996
  • Using 2-D and 3-D Hoffman brain phantoms, a 3-D Jaszczak phantom, and single photon emission computed tomography (SPECT), the effects of data acquisition parameters, attenuation, noise, scatter, and reconstruction algorithm on image quantitation and image quality were studied. For the data acquisition parameters, images were acquired while changing the angular increment of rotation and the radius of rotation. A smaller angular increment resulted in superior image quality, and a smaller radius from the center of rotation gave better image quality, since resolution degrades as the distance from the detector to the object increases. Using flood data in the Jaszczak phantom, an optimal attenuation coefficient of 0.12 cm^-1 was derived for all collimators, and all images were then corrected for attenuation using this coefficient. The flood data from the Jaszczak phantom showed a concave line profile without attenuation correction and a flat line profile with it, and attenuation correction improved both image quality and image quantitation. To study the effects of noise, images were acquired for 1, 2, 5, 10, and 20 minutes. The 20-minute image showed much better noise characteristics than the 1-minute image, indicating that increasing the counting time reduces the noise, which follows a Poisson distribution. Images were also acquired using dual energy windows, one for the main photopeak and one for the scatter peak, and compared with and without scatter correction. Scatter correction improved image quality, so that the cold spheres and bar patterns in the Jaszczak phantom were clearly visualized; applied to the 3-D Hoffman brain phantom, it likewise produced better image quality. In conclusion, SPECT images were significantly affected by data acquisition parameters, attenuation, noise, scatter, and reconstruction algorithm, and these factors must be optimized or corrected to obtain useful SPECT data in clinical applications.

  • PDF
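The dual-energy-window scatter correction described above is classically a scaled subtraction of the scatter-window image from the photopeak image. A minimal sketch follows; the scaling factor k ≈ 0.5 is the textbook value, not necessarily the one used in this study:

```python
import numpy as np

def dew_scatter_correction(photopeak, scatter_window, k=0.5):
    """Dual-energy-window (DEW) scatter correction: subtract a scaled copy
    of the scatter-window image from the photopeak image."""
    corrected = photopeak - k * scatter_window
    return np.clip(corrected, 0.0, None)   # counts cannot be negative

# toy 2x2 count images from the photopeak and scatter windows
photopeak = np.array([[100.0, 80.0], [60.0, 40.0]])
scatter = np.array([[40.0, 30.0], [20.0, 100.0]])
corrected = dew_scatter_correction(photopeak, scatter)
```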

Development of A Dynamic Departure Time Choice Model based on Heterogeneous Transit Passengers (이질적 지하철승객 기반의 동적 출발시간선택모형 개발 (도심을 목적지로 하는 단일 지하철노선을 중심으로))

  • 김현명;임용택;신동호;백승걸
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.5
    • /
    • pp.119-134
    • /
    • 2001
  • This paper proposes a dynamic transit vehicle simulation model and a dynamic transit passenger simulation model that can simultaneously simulate the vehicles and passengers traveling on a transit network, together with a dynamic departure time choice algorithm based on the individual passenger. The model assumes that each passenger's behavior is heterogeneous and follows a stochastic process, relaxing the usual homogeneity assumption, and that travelers have imperfect information and bounded rationality, in order to represent and simulate each passenger's behavior more realistically. The model integrates an inference and preference-reforming procedure into the learning and decision-making process in order to describe and analyze the departure time choices of transit passengers. To analyze and evaluate the model, an example transit line heading for a workplace was used. Numerical results indicate that in the heterogeneous-passenger model the travelers' preferences strongly influence departure time choice behavior, while in the homogeneous-passenger model they do not, and the homogeneous results appear unrealistic from the viewpoint of rational behavior. These results imply that aggregate travel demand models, such as traditional user-equilibrium network assignment models that assume perfect information, homogeneity, and rationality, may differ from the dynamic travel demand patterns that occur on actual networks.

  • PDF
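A day-to-day learning loop of the kind the paper describes, with heterogeneous passengers who update perceived costs from experience, might be sketched as follows. All cost terms and parameters here are illustrative, not the authors' specification, and the inference/preference-reforming procedure is simplified to exponential smoothing:

```python
import random

def simulate_departure_choice(n_passengers=100, n_days=30, n_slots=6, seed=7):
    """Day-to-day departure-time choice with heterogeneous passengers.
    Each passenger keeps a perceived cost per time slot, updated by
    exponential smoothing of the experienced cost (crowding + schedule delay)."""
    rng = random.Random(seed)
    # heterogeneity: each passenger has their own preferred slot and learning rate
    preferred = [rng.randrange(n_slots) for _ in range(n_passengers)]
    alpha = [rng.uniform(0.1, 0.9) for _ in range(n_passengers)]
    perceived = [[0.0] * n_slots for _ in range(n_passengers)]
    choice = [0] * n_passengers
    for _ in range(n_days):
        # each passenger picks the slot with the lowest perceived cost
        choice = [min(range(n_slots), key=lambda s: perceived[i][s])
                  for i in range(n_passengers)]
        load = [choice.count(s) for s in range(n_slots)]
        for i in range(n_passengers):
            s = choice[i]
            cost = load[s] / n_passengers + 0.2 * abs(s - preferred[i])
            perceived[i][s] = (1 - alpha[i]) * perceived[i][s] + alpha[i] * cost
    return [choice.count(s) for s in range(n_slots)]

loads = simulate_departure_choice()   # final-day passenger load per time slot
```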

The Effect of Mean Brightness and Contrast of Digital Image on Detection of Watermark Noise (워터 마크 잡음 탐지에 미치는 디지털 영상의 밝기와 대비의 효과)

  • Kham Keetaek;Moon Ho-Seok;Yoo Hun-Woo;Chung Chan-Sup
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.305-322
    • /
    • 2005
  • Watermarking is a widely employed method for protecting the copyright of a digital image: the owner's unique image is embedded into the original image. A stronger level of watermark insertion helps the watermark survive extraction even after various distortions such as changes in image size or resolution. At the same time, the level should be moderate enough to remain below human visibility; finding a balance between these two is crucial in watermarking. In typical watermarking algorithms, a predefined watermark strength, computed from the physical difference between the original and embedded images, is applied to all images uniformly. However, the mean brightness or contrast of the surrounding image, rather than the absolute brightness of an object, can affect human sensitivity for object detection. In the present study, we examined whether the detectability of watermark noise is altered by image statistics: the mean brightness and contrast of the image. As a first step, we made nine fundamental images with varied brightness and contrast from the original image, and measured the detectability of watermark noise for each. The results showed that the watermark strength required for detection increased as the brightness and contrast of the fundamental image increased. We fitted the data to a regression line that can be used to estimate the watermark strength for a given image with a certain brightness and contrast. Although other factors must be considered before directly applying this formula to an actual watermarking algorithm, an adaptive watermarking algorithm could be built on this formula using image statistics such as brightness and contrast.

  • PDF
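The regression-based adaptive-strength idea in the abstract can be sketched as an ordinary least-squares fit of detection threshold against brightness and contrast. The data below are synthetic, not the study's psychophysical measurements:

```python
import numpy as np

def fit_strength_model(brightness, contrast, threshold_strength):
    """Least-squares fit: threshold ~= b0 + b1*brightness + b2*contrast."""
    X = np.column_stack([np.ones(len(brightness)), brightness, contrast])
    coef, *_ = np.linalg.lstsq(X, np.asarray(threshold_strength), rcond=None)
    return coef

def adaptive_strength(coef, brightness, contrast):
    """Estimate the embedding strength at the detection threshold."""
    b0, b1, b2 = coef
    return b0 + b1 * brightness + b2 * contrast

# synthetic data: detection threshold rises with brightness and contrast
bright = np.array([50, 50, 128, 128, 200, 200], dtype=float)
contr = np.array([10, 40, 10, 40, 10, 40], dtype=float)
thresh = 0.5 + 0.01 * bright + 0.05 * contr
coef = fit_strength_model(bright, contr, thresh)
s = adaptive_strength(coef, 100.0, 20.0)   # 0.5 + 1.0 + 1.0 = 2.5
```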

A Study on Trade Area Analysis with the Use of Modified Probability Model (변형확률모델을 활용한 소매업의 상권분석 방안에 관한 연구)

  • Jin, Chang-Beom;Youn, Myoung-Kil
    • Journal of Distribution Science
    • /
    • v.15 no.6
    • /
    • pp.77-96
    • /
    • 2017
  • Purpose - This study aims to develop strategies for responding to environmental change in domestic retail store types. New types of retail have recently emerged, and trade area platforms now focus on the speed of data rather than on district borders. In addition, 'trade area smart' services enabled by gigabit internet are changing retail types, and context shopping is changing consumers' purchase patterns through data capture, technological capability, and algorithm development. For these reasons, sales estimation models built on the older notions of scale and time have proven flawed, and a new model needs to be constructed. Research design, data, and methodology - This study measures retail change in large multi-shopping malls to assess the outlook for the retail industry and trade area competition, with a theoretical grounding in retail store types and overall domestic retail conditions. Competition among retail store types is strong while the borders among them are fading, and a new model is needed because sales expectations are hard to obtain under such competition. For comprehensive research, methods based on statistical analysis were therefore excluded, and field surveys and literature investigation were used to identify problems and propose an alternative. Research fidelity was improved by complementing the research data with input from retail specialists and department stores. Results - This study analyzed trade area survival and its patterns through sales estimation and empirical studies of trade areas. The sales estimation, based on the Huff model, computes the expected share of household shopping absorbed from each trade area; based on the results, this paper estimated the sales scale and then derived a modified probability model. Conclusions - In times of retail chain destruction and off-line store reorganization, the modified Huff model has problems in estimating sales. The transformation probability model, which addresses these problems, was found to be more effective under competitive business conditions. This study offers a viable alternative for estimating sales in related trade areas by constructing a new modified probability model. A future task is to extend the approach from an IT infrastructure based on data and evidence to a DT infrastructure.
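The Huff-model sales estimation that such modified models build on can be sketched as follows; the attractiveness values, distances, and distance-decay exponent are illustrative:

```python
def huff_probabilities(attractiveness, distances, lam=2.0):
    """Huff model: P_j = A_j / d_j^lam, normalised over all stores j."""
    weights = [a / (d ** lam) for a, d in zip(attractiveness, distances)]
    total = sum(weights)
    return [w / total for w in weights]

def expected_sales(households, spend_per_household, probs):
    """Expected sales captured by each store from one residential zone."""
    return [households * spend_per_household * p for p in probs]

# one residential zone, three competing stores
# (floor area as attractiveness, distance in km)
probs = huff_probabilities([5000, 12000, 8000], [2.0, 4.0, 3.0])
sales = expected_sales(households=10000, spend_per_household=300.0, probs=probs)
```

Summing each store's expected capture over all zones in the trade area gives the sales estimate that the modified model then adjusts.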

Research on rapid source term estimation in nuclear accident emergency decision for pressurized water reactor based on Bayesian network

  • Wu, Guohua;Tong, Jiejuan;Zhang, Liguo;Yuan, Diping;Xiao, Yiqing
    • Nuclear Engineering and Technology
    • /
    • v.53 no.8
    • /
    • pp.2534-2546
    • /
    • 2021
  • Nuclear emergency preparedness and response is essential to ensuring the safety of a nuclear power plant (NPP). The key support technologies for nuclear emergency decision-making usually consist of accident diagnosis, source term estimation, accident consequence assessment, and protective action recommendation. Source term estimation is perhaps the most difficult among them: in the Fukushima accident, for example, poor communication, incomplete information, and the complicated accident scenario made it hard to determine the reactor status and estimate the source term in time, which in turn made it hard to decide on appropriate emergency response actions. Hence, this paper develops a method for rapid source term estimation to support nuclear emergency decision-making for a pressurized water reactor NPP, with the aim of making existing knowledge of the NPP better support nuclear emergencies. First, the paper studies how to build a Bayesian network model of the NPP from professional and engineering knowledge, presenting a method that transforms the PRA model (event trees and fault trees) into a corresponding Bayesian network. To handle physical phenomena that are modeled as pivotal events in Level 2 PRA but have no sensors directly associated with their occurrence, a weighted assignment approach based on expert assessment is proposed. Second, the NPP monitoring data are fed into the Bayesian network model, and the real-time status of pivotal events and initiating events is determined using the junction tree algorithm. Third, since PRA knowledge links accident sequences to possible release categories, the proposed method can find the most likely release category, namely the source term, for the candidate accident scenarios; the probabilities of the possible accident sequences and of the source term are calculated. Finally, the prototype software is checked against several sets of accident scenario data generated by an AP1000 NPP simulator, including large loss-of-coolant accident, loss of main feedwater, main steam line break, and steam generator tube rupture. The results show that the proposed method for rapid source term estimation in nuclear emergency decision-making is promising.
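The diagnosis step, computing the posterior probability of a release state from sensor evidence in a Bayesian network, can be illustrated on a toy network by exhaustive enumeration (production systems, as the paper notes, use the junction tree algorithm instead). All node names and probabilities here are hypothetical:

```python
from itertools import product

# Toy Bayesian network for accident diagnosis (all values hypothetical):
# InitiatingEvent -> CoreDamage -> Release ; CoreDamage -> SensorAlarm
P_init = {True: 0.01, False: 0.99}                                # P(init)
P_damage = {True: {True: 0.7, False: 0.3},                        # P(damage | init)
            False: {True: 0.001, False: 0.999}}
P_alarm = {True: {True: 0.95, False: 0.05},                       # P(alarm | damage)
           False: {True: 0.02, False: 0.98}}
P_release = {True: {True: 0.6, False: 0.4},                       # P(release | damage)
             False: {True: 0.0, False: 1.0}}

def posterior_release(alarm_observed=True):
    """P(Release | SensorAlarm) by exhaustive enumeration over hidden nodes."""
    joint = {True: 0.0, False: 0.0}
    for init, damage, release in product([True, False], repeat=3):
        p = (P_init[init] * P_damage[init][damage]
             * P_alarm[damage][alarm_observed] * P_release[damage][release])
        joint[release] += p
    return joint[True] / (joint[True] + joint[False])

p = posterior_release(alarm_observed=True)
```

Observing the alarm sharply raises the release probability relative to its prior, which is the qualitative behavior a PRA-derived network exploits for rapid source term estimation.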