• Title/Summary/Keyword: image filter

Search Result 2,248, Processing Time 0.037 seconds

Evaluation of Ovarian Dose in Women of Childbearing Age with Breast Cancer during Radiation Therapy (가임기 여성의 방사선 치료 시 난소 선량 평가)

  • Park, Sung Jun;Lee, Yeong Cheol;Kim, Seon Myeong;Kim, Young Bum
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.33
    • /
    • pp.145-153
    • /
    • 2021
  • Purpose: The purpose of this study is to evaluate the ovarian dose delivered during radiation therapy for breast cancer in women of childbearing age. The ovarian dose is evaluated by comparing and analyzing the dose calculated in the treatment planning system for each treatment technique against the dose measured with thermoluminescence dosimeters (TLDs). The clinical usefulness of a lead (Pb) apron is also investigated by analyzing the dose with and without it. Materials and Methods: A Rando humanoid phantom was used for the measurements, and wedge-filter radiation therapy, 3D conformal radiation therapy, and intensity-modulated radiation therapy were used as the treatment techniques. A treatment plan was established so that 95% of the prescribed dose could be delivered to the right breast of the 3D image of the Rando humanoid phantom obtained with the CT simulator. TLDs were inserted at the surface and at depth in the virtual ovary of the Rando humanoid phantom and then irradiated. The measurement locations were the treatment center; a point shifted 2 cm from the center toward the contralateral breast; points 5 cm, 10 cm, 12.5 cm, 15 cm, 17.5 cm, and 20 cm inferior to the boundary of the right breast along the treatment axis; and the surface and depth of the right ovary, for a total of 9 points. In the treatment planning system, the doses from the two wedge-filter techniques, three-dimensional conformal radiotherapy, and intensity-modulated radiation therapy were planned and compared, and dose measurements with and without a lead apron were compared and analyzed for intensity-modulated radiation therapy. Each measured value was obtained by averaging three TLD readings per point and converting them with the TLD calibration factor, yielding a mean point dose.
In order to compare the treatment-plan values with the measured values, the absolute dose was measured and compared at each point (%Diff). Results: At Point A, the treatment center, a maximum of 201.7 cGy was calculated in the treatment planning system and a maximum of 200.6 cGy was measured with TLD. All treatment plans calculated 0 cGy from Point G, located 17.5 cm below the breast boundary. With TLD, a maximum of 2.6 cGy was measured at Point G and a maximum of 0.9 cGy at Point J, the ovarian point, with absolute dose differences of 0.3%~1.3%. The dose difference with and without the lead apron ranged from a maximum of 2.1 cGy to a minimum of 0.1 cGy, with %Diff values of 0.1%~1.1%. Conclusion: In the treatment planning system, the dose differences among the three treatment plans, 0.85% to 2.45%, were not significant. At the ovary, the difference between the treatment planning system and the measured dose in the Rando humanoid phantom was within 0.9%, with the measured dose slightly higher. The treatment planning system does not accurately model the effect of scattered radiation, and the measured values are thought to reflect both the scattered dose and the dose from CBCT performed with the TLDs inserted. In the dosimetry with and without a lead apron, shielding was more effective the closer the apron was to the treatment field. Although pregnancy or artificial insemination during radiotherapy is not clinically appropriate, the dose delivered to the ovaries during treatment is not expected to significantly affect the reproductive function of women of childbearing age after radiotherapy. However, since women of childbearing age experience persistent anxiety, presenting the data from this study may promote psychological stability.
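The point-dose averaging and %Diff comparison described above can be sketched as follows; the readings, calibration factor, and the choice of normalizing %Diff to the prescribed dose are illustrative assumptions, not the study's actual data or exact definition.

```python
# Hypothetical sketch of the TLD point-dose and %Diff computation.
# All numeric values below are made up for illustration.

def tld_point_dose(readings, cal_factor_cGy_per_unit):
    """Average three TLD readings per point and convert with a calibration factor."""
    return sum(readings) / len(readings) * cal_factor_cGy_per_unit

def percent_diff(planned_cGy, measured_cGy, prescribed_cGy=200.0):
    """Absolute planned-vs-measured difference as a percentage of the
    prescribed dose (one common way to define %Diff)."""
    return abs(planned_cGy - measured_cGy) / prescribed_cGy * 100.0

measured = tld_point_dose([40.0, 41.0, 39.5], 5.0)  # mean point dose in cGy
diff = percent_diff(201.7, measured)                # sub-1% agreement at the center
```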

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, Google DeepMind's artificial intelligence program for Baduk (Go), won a decisive victory over Lee Sedol. Many people thought that a machine could not beat a human at Go because, unlike chess, the number of possible game paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn attention. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, areas where existing machine learning techniques struggled. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of the MLP model, a traditional artificial neural network. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted under restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output channels (filters), and the application of dropout. The F1 score was used to evaluate model performance, since it shows how well a model classifies the class of interest rather than overall accuracy. The details of applying each deep learning technique were as follows. The CNN algorithm recognizes features by reading values adjacent to a given value, but in business data the proximity of fields usually does not matter because the fields are generally independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer so that decisions are made from the resulting features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
The experiment yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models. This is interesting because CNNs performed well not only in the fields where their effectiveness has been proven but also in binary classification problems, to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
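The F1-score criterion the study uses instead of overall accuracy can be sketched in plain Python; the labels below are made up, not drawn from the bank's telemarketing data.

```python
# Minimal F1-score sketch: harmonic mean of precision and recall for the
# class of interest (here, customers who open an account, labeled 1).

def f1_score(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# On imbalanced data, accuracy rewards predicting the majority class;
# F1 does not, which is why it suits this kind of marketing-response task.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
score = f1_score(y_true, y_pred)
```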

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images and voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs are strong at interpreting images; thus, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn in a 40 × 40 pixel image, with the graph of each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the training images.
Regarding the parameters of CNN-FG, we adopted two convolution filter banks (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers. In the pooling layer, a 2 × 2 max-pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for predicting an upward trend, the other a downward trend). The activation functions of the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that of the output layer to the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To balance the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), the A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with that of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
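Steps 1 to 3 of CNN-FG, turning 5-day windows of indicators into 40 × 40 RGB graph images, can be sketched as below. The nearest-pixel plotting, the per-window normalization, and the color assignments are illustrative assumptions; the paper does not specify its drawing routine.

```python
# Sketch of CNN-FG's image-construction steps: split a multivariate series
# into 5-day windows, draw each variable in its own color on a 40x40
# canvas, and keep the result as an RGB (3-channel) array.
import numpy as np

def window_to_rgb(window, colors, size=40):
    """window: (n_vars, 5) array; colors: one RGB tuple per variable."""
    img = np.ones((size, size, 3), dtype=np.float32)        # white canvas
    lo, hi = window.min(), window.max()
    scale = float(hi - lo) or 1.0                           # avoid divide-by-zero
    xs = np.linspace(0, size - 1, window.shape[1]).astype(int)
    for var, color in zip(window, colors):
        ys = ((var - lo) / scale * (size - 1)).astype(int)
        img[size - 1 - ys, xs] = color                      # crude point plot
    return img

series = np.random.rand(3, 20)                              # 3 indicators, 20 days
windows = [series[:, i:i + 5] for i in range(0, 20, 5)]     # step 1: 5-day intervals
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]                  # one color per variable
images = np.stack([window_to_rgb(w, colors) for w in windows])  # steps 2-3
```

Each image in `images` is exactly the kind of 40 × 40 × 3 tensor the CNN classifier in step 5 would consume.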

Evaluation of Ovarian Dose in Women of Childbearing Age with Breast Cancer in Tomotherapy (가임기 여성의 유방암 토모치료 시 난소선량 평가비교)

  • Lee, Soo Hyeung;Park, Soo Yeun;Choi, Ji Min;Park, Ju Young;Kim, Jong Suk
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.337-343
    • /
    • 2014
  • Purpose: The aim of this study is to evaluate the unwanted dose delivered to the ovaries by scatter and leakage from the treatment fields during Tomotherapy for women of childbearing age with breast cancer. Materials and Methods: Radiation treatment plans for left breast cancer were established using the Tomotherapy planning system (Tomotherapy, Inc., USA), generated with both the helical and direct Tomotherapy methods for comparison. The CT images for planning were scanned at 2.5 mm slice thickness using an anthropomorphic phantom (Alderson-Rando phantom, The Phantom Laboratory, USA). The measurement points for the ovarian dose were set 30 cm laterally from the mid-point of the treatment field, at the pelvis. The measurements were repeated five times and averaged using glass dosimeters (1.5 mm diameter, 12 mm length) equipped with a low-energy correction filter. The measured dose values were also converted to organ equivalent dose (OED) using the linear-exponential dose-response model. Results: The scattered ovarian doses measured for the Tomo Helical and Tomo Direct methods averaged 64.94 ± 0.84 mGy and 37.64 ± 1.20 mGy for the left ovary, and 64.38 ± 1.85 mGy and 32.96 ± 1.11 mGy for the right ovary. This shows that in Tomotherapy the scattered dose of the Tomo Helical method, which has relatively more monitor units (MUs) and a longer irradiation time, is approximately 1.8 times higher than that of the Tomo Direct method. Conclusion: The scattered dose to the left and right ovaries of women of childbearing age is lower than the ICRP-recommended dose, a level that does not raise serious concern about infertility or secondary cancer. However, as breast cancer occurs at younger ages and radiation therapy with high-precision image-guided equipment such as Tomotherapy develops further, clinical follow-up studies on the ovarian dose of patients of childbearing age will be increasingly required.
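The OED conversion mentioned in the abstract can be sketched with the Schneider-style linear-exponential dose-response model, OED = (1/V) Σ Vᵢ Dᵢ exp(−α Dᵢ); the α value and the sub-volume weights below are illustrative assumptions, not the parameters used in the study.

```python
# Linear-exponential organ-equivalent-dose (OED) sketch.
# alpha (per Gy) is an illustrative organ-response parameter.
import math

def oed_linear_exponential(doses_Gy, volumes, alpha=0.044):
    """OED = (1/V) * sum(V_i * D_i * exp(-alpha * D_i)) over sub-volumes."""
    total = sum(volumes)
    return sum(v * d * math.exp(-alpha * d)
               for d, v in zip(doses_Gy, volumes)) / total

# Two equal ovary sub-volumes at roughly the measured scattered-dose level
# (~65 mGy = 0.065 Gy); at such low doses OED is nearly the mean dose.
oed = oed_linear_exponential([0.065, 0.064], [1.0, 1.0])
```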

Linearity Estimation of PET/CT Scanner in List Mode Acquisition (List Mode에서 PET/CT Scanner의 직선성 평가)

  • Choi, Hyun-Jun;Kim, Byung-Jin;Ito, Mikiko;Lee, Hong-Jae;Kim, Jin-Ui;Kim, Hyun-Joo;Lee, Jae-Sung;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.86-90
    • /
    • 2012
  • Purpose: Quantification of myocardial blood flow (MBF) using dynamic PET imaging has the potential to assess coronary artery disease. Rb-82 plays a key role in the clinical assessment of myocardial perfusion with PET. However, MBF can be overestimated when the left-ventricular input function is underestimated at the beginning of the acquisition, which happens when the scanner's count rate is not linear with activity concentration because of dead time. Therefore, in this study, we evaluated count-rate linearity as a function of activity concentration in PET data acquired in list mode. Materials & Methods: A cylindrical phantom (diameter 12 cm, length 10.5 cm) filled with 296 MBq of F-18 solution and 800 mL of water was used to estimate the linearity of the Biograph 40 True Point PET/CT scanner. PET data were acquired in list mode, 10 min per frame for one bed position, at activity concentration levels spanning 7 half-lives. The images were reconstructed with the OSEM and FBP algorithms. Prompt, net true, and random counts were measured as functions of activity concentration. Total and background counts were measured by drawing ROIs on the phantom images, and linearity was assessed with background correction. Results: The prompt count rates in list mode increased linearly in proportion to the activity concentration. At low activity concentrations (<30 kBq/mL), the prompt net true and random count rates increased with the activity concentration. At high activity concentrations (>30 kBq/mL), the rate of increase of the net true counts decreased slightly, while that of the random counts increased. There was no difference in image-intensity linearity between the OSEM and FBP algorithms.
Conclusion: The Biograph 40 True Point PET/CT scanner showed good count-rate linearity even at high activity concentrations (~370 kBq/mL). This result indicates that the scanner is suitable for quantitative analysis of dynamic cardiac studies using Rb-82, N-13, O-15, and F-18.
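The dead-time effect the study tests for can be sketched with a standard paralyzable dead-time model, m = n · exp(−n τ): the measured rate stays nearly proportional to the true rate at low activity and droops at high activity. The dead time τ below is an arbitrary illustrative value, not the Biograph 40's.

```python
# Paralyzable dead-time sketch of count-rate (non-)linearity.
import math

def measured_rate(true_rate_cps, tau_s=1e-6):
    """Measured count rate under a paralyzable dead time tau_s."""
    return true_rate_cps * math.exp(-true_rate_cps * tau_s)

low, high = 1e3, 1e6
loss_low = 1 - measured_rate(low) / low    # ~0.1% loss: effectively linear
loss_high = 1 - measured_rate(high) / high # ~63% loss: strongly non-linear
```

A scanner whose losses stay small over the measured range, as reported here, can use a simple linear activity-to-count-rate calibration for dynamic studies.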


A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration
    • /
    • v.21 no.3
    • /
    • pp.171-182
    • /
    • 2018
  • We performed seismic field-data processing with open-source software (Madagascar) to verify whether it is applicable to field data, which have a low signal-to-noise ratio and high uncertainty in velocities. Madagascar, which is scripted in Python, is considered well suited to developing processing technologies thanks to its multidimensional data-analysis capability and reproducibility. Nevertheless, this open-source software has not been widely used for field-data processing because of its complicated interfaces and data-structure system. To verify its effectiveness on field data, we applied Madagascar to a typical seismic processing flow comprising data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The test field data were acquired in the Gunsan Basin, Yellow Sea, with a 480-channel streamer and 4 air-gun arrays. The results at every processing step were compared with those from Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar shows relatively high efficiency in data I/O and management as well as reproducibility, and it performs some automated procedures, such as stacking-velocity analysis, quickly and accurately. There were no remarkable differences between the two packages after the signal-enhancement flows. For the deeper part of the subsurface image, however, the commercial software produced better results, simply because it offers various demultiple flows and interactive environments for delicate processing work that Madagascar lacks.
Considering that many researchers around the world are developing data-processing algorithms for Madagascar, we can expect open-source software such as Madagascar to be widely used for commercial-level processing, given its strengths in expandability, cost-effectiveness, and reproducibility.
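The F-K filtering step in the flow above can be sketched with NumPy rather than Madagascar's own programs: transform a shot gather to the frequency-wavenumber domain, zero an unwanted fan of wavenumbers, and transform back. The mask geometry below is an illustrative assumption, not the study's actual reject fan.

```python
# Minimal F-K (frequency-wavenumber) filter sketch on a (time, offset) gather.
import numpy as np

def fk_filter(gather, keep):
    """gather: (nt, nx) array; keep: boolean pass-mask of the same shape
    in the F-K domain. Returns the filtered gather."""
    fk = np.fft.fft2(gather)
    return np.real(np.fft.ifft2(fk * keep))

nt, nx = 64, 32
gather = np.random.randn(nt, nx)            # stand-in for a real shot gather
keep = np.ones((nt, nx), dtype=bool)
keep[:, nx // 4: 3 * nx // 4] = False       # reject a band of wavenumbers
filtered = fk_filter(gather, keep)          # coherent-noise energy removed
```

In Madagascar the same idea would be expressed as a reproducible SConstruct flow, which is what makes step-by-step comparison against a commercial package practical.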

Time-Lapse Crosswell Seismic Study to Evaluate the Underground Cavity Filling (지하공동 충전효과 평가를 위한 시차 공대공 탄성파 토모그래피 연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration
    • /
    • v.1 no.1
    • /
    • pp.25-30
    • /
    • 1998
  • Time-lapse crosswell seismic data recorded before and after cavity filling showed that the filling increased the velocity in a known cavity zone at an old mine site in the Incheon area. The seismic response on the tomograms, together with geologic data from drillings, implies that the cavity is either small or filled with debris. In this study, I attempted to evaluate the filling effect by analyzing velocities measured from the time-lapse tomograms. The data, acquired with a downhole airgun and a 24-channel hydrophone system, revealed measurable source statics, and I present a methodology to estimate them: 1) examine the firing time of each source and remove the effect of irregular firing times, and 2) estimate the residual statics caused by inaccurate source positioning. The proposed multi-step inversion can reduce high-frequency numerical noise and enhance resolution in the zone of interest, and with different starting models it successfully shows the subtle velocity changes in the small cavity zone. The inversion procedure is: 1) run an inversion on regular-sized cells and generate a gross velocity-structure image by applying a 2-D median filter to the resulting tomogram, and 2) construct the starting velocity model by modifying the final velocity model from the first phase so that the zone of interest consists of small grid cells. The final velocity model from the baseline survey was used as the starting model for the monitor inversion. Since a velocity change was expected only in the cavity zone, the number of model parameters in the monitor inversion could be reduced significantly by fixing the model outside the cavity zone to the baseline model.
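The 2-D median filtering used in phase 1 to extract the gross velocity structure can be sketched in plain NumPy; the window size and the toy velocity values are illustrative assumptions.

```python
# 2-D median filter sketch: suppresses isolated (spiky) velocity anomalies
# while preserving the smooth background structure of a tomogram.
import numpy as np

def median_filter_2d(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')          # replicate edges
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

tomogram = np.full((8, 8), 2000.0)                  # background velocity, m/s
tomogram[4, 4] = 1500.0                             # isolated low-velocity spike
smooth = median_filter_2d(tomogram)                 # spike replaced by the median
```

The smoothed image plays the role of the gross velocity structure from which the refined, small-grid starting model is then built.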


Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.3
    • /
    • pp.259-267
    • /
    • 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count-rate performance in 2D and 3D modes, was evaluated using this new standard. Methods: The ECAT EXACT 47 is a BGO-crystal PET scanner covering an axial field of view (FOV) of 16.2 cm; retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns, energy window: 250~650 keV). For the spatial-resolution measurement, an F-18 point source was placed at the center of the axial FOV and at one fourth of the axial FOV from the center, in each case at (a) x=0, y=1 cm, (b) x=0, y=10 cm, and (c) x=10, y=0 cm, where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter and no post-processing. To measure system sensitivity, the NEMA sensitivity phantom, filled with F-18 solution and surrounded by 1~5 aluminum sleeves, was scanned at the center of the transaxial FOV and at 10 cm offset from the center; attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm long, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured over 7 half-lives to obtain the noise-equivalent count rate (NECR) and scatter fraction. We confirmed that the dead-time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter + random) ratios of the last 3 frames, in which the random rates are negligibly small.
Results: Axial and transverse resolutions at 1 cm offset from the center were 0.62 and 0.66 cm (FBP, 2D and 3D) and 0.67 and 0.69 cm (FBP, 2D and 3D), respectively. Axial, transverse radial, and transverse tangential resolutions at 10 cm offset from the center were 0.72 and 0.68 cm (FBP, 2D and 3D), 0.63 and 0.66 cm (FBP, 2D and 3D), and 0.72 and 0.66 cm (FBP, 2D and 3D). Sensitivity values were 708.6 (2D) and 2,931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3,398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). The peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance data for the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for quantitative data analysis and for determining optimal image-acquisition protocols with this widely used clinical and research scanner.
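The count-rate figures of merit quoted above follow standard NEMA-style definitions; a common form is NECR = T²/(T + S + R) (with 2R when randoms are estimated from a delayed window) and scatter fraction SF = S/(T + S). The rates below are illustrative, chosen only so that SF matches the reported 2D value of 0.19.

```python
# NECR and scatter-fraction sketch from true (T), scatter (S) and
# random (R) coincidence rates, all in kcps.

def necr(trues, scatters, randoms):
    """Noise-equivalent count rate, T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scatters + randoms)

def scatter_fraction(trues, scatters):
    """SF = S / (T + S), random rates assumed negligible."""
    return scatters / (trues + scatters)

sf = scatter_fraction(81.0, 19.0)   # illustrative rates giving SF = 0.19
peak = necr(81.0, 19.0, 0.0)
```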