• Title/Summary/Keyword: Converting machine


Classification and discrimination of excel radial charts using the statistical shape analysis (통계적 형상분석을 이용한 엑셀 방사형 차트의 분류와 판별)

  • Seungeon Lee;Jun Hong Kim;Yeonseok Choi;Yong-Seok Choi
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.1
    • /
    • pp.73-86
    • /
    • 2024
  • An Excel radial chart is a very useful graphical method for conveying information about numerical data, but it is not easy to discriminate or classify many individuals with it. In this case, after representing each individual of a radial chart as a shape, we need to apply shape analysis. Because a radial chart produces as many landmarks as there are variables describing the object, we consider the shape obtained by connecting those landmarks with lines. When the number of variables is large, the resulting shape becomes complicated and is hard to grasp even when visualized with a radial chart, so principal component analysis (PCA) is performed on the variables to create a visually effective shape. Classification tables and classification rates are obtained with traditional discriminant analysis, support vector machines (SVM), and artificial neural networks (ANN), both before and after PCA. In addition, the discrimination performance of generalized Procrustes analysis (GPA) coordinates and Bookstein coordinates is compared. Bookstein coordinates, obtained by normalizing the position, rotation, and scale of a shape with respect to base landmarks, show a higher classification rate than GPA coordinates.
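Below is a minimal sketch of the kind of pipeline the abstract describes: radial-chart landmarks are registered to Bookstein coordinates (position, rotation, and scale removed with respect to two base landmarks) and the registered coordinates are fed to an SVM classifier. The synthetic data, baseline choice, and scikit-learn usage are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def bookstein_coords(points, base=(0, 1)):
    """Register a (k, 2) landmark configuration so that the two base landmarks
    map to (-0.5, 0) and (0.5, 0), removing position, rotation, and scale."""
    z = points[:, 0] + 1j * points[:, 1]            # landmarks as complex numbers
    z1, z2 = z[base[0]], z[base[1]]
    w = (z - (z1 + z2) / 2) / (z2 - z1)             # translate, rotate, and scale by the baseline
    return np.column_stack([w.real, w.imag])

# Toy data: a radial chart with p variables gives p landmarks on equally spaced rays
rng = np.random.default_rng(0)
p, n = 8, 60
angles = 2 * np.pi * np.arange(p) / p
X, y = [], []
for g in range(2):                                   # two groups differing mainly in variable 3
    profile = 1.0 + 0.1 * rng.standard_normal((n, p))
    profile[:, 3] += 0.4 * g
    for r in profile:
        pts = np.column_stack([r * np.cos(angles), r * np.sin(angles)])
        X.append(bookstein_coords(pts).ravel())      # flattened registered coordinates as features
        y.append(g)

print(cross_val_score(SVC(kernel="linear"), np.array(X), np.array(y), cv=5).mean())
```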

The Accuracy Evaluation according to Dose Delivery Interruption and Restart for Volumetric Modulated Arc Therapy (용적변조회전 방사선치료에서 선량전달의 중단 및 재시작에 따른 정확성 평가)

  • Lee, Dong Hyung;Bae, Sun Myung;Kwak, Jung Won;Kang, Tae Young;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.77-85
    • /
    • 2013
  • Purpose: Accurate gantry rotation, collimator movement, and dose-rate delivery are essential for successful volumetric modulated arc therapy (VMAT), because they are tightly interlocked with a complex treatment plan. Dose delivery, however, can be interrupted during treatment by various factors related to the treatment machine or the treatment plan. If an unexpected problem with the machine or the patient interrupts VMAT, the machine motion for delivering the remaining dose is restarted from that point. In this study, we investigated the effect of interruption and restart on dose delivery in VMAT. Materials and Methods: Treatment plans of 10 patients treated at our center were converted to Digital Imaging and Communications in Medicine (DICOM) format with the treatment planning system (Eclipse v10.0, Varian, USA), and the dose distribution of each VMAT delivery was measured and compared. We used the 6 MV photon beam of a Trilogy linear accelerator (Varian, USA) and the OmniPro I'mRT system (v1.7b, IBA Dosimetry, Germany) to analyze the data, which were acquired with two types of interruption, four times for each case. The door interlock and beam-off functions were used to stop and then restart VMAT dose delivery. The gamma index in the OmniPro I'mRT system and a t-test in Microsoft Excel 2007 were used to evaluate the results (a sketch of the gamma-index calculation follows this entry). Results: The average gamma index for the door-interlock, beam-off, and uninterrupted deliveries was 0.141, 0.128, and 0.100, respectively; the corresponding standard deviations were 0.099, 0.091, and 0.071, and the maximum gamma values were 0.413, 0.379, and 0.286. The analysis has a 95% confidence level, and the p-value of the t-test is under 0.05. The gamma pass rate (3%, 3 mm) was acceptable in all measurements. Conclusion: By analyzing the measured data, we confirmed that the interruptions in this study do not seriously affect VMAT dose delivery. However, this study did not cover all possible interruptions and errors involving gantry rotation, collimator movement, or the patient, so the treatment machine and software should be maintained continuously to deliver an accurate dose when performing VMAT for the many kinds of cancer patients.

  • PDF
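The gamma index used above combines a dose-difference and a distance-to-agreement criterion. Below is a minimal 1-D sketch of a global gamma calculation with 3%/3 mm criteria; the profiles and normalization choices are illustrative assumptions, not data or code from the paper.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.03, dist_crit=3.0):
    """Global gamma index for 1-D dose profiles.
    dose_crit is a fraction of the maximum reference dose, dist_crit is in mm."""
    dd_tol = dose_crit * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dist_crit) ** 2      # squared distance term
        dose2 = ((d_eval - dr) / dd_tol) ** 2         # squared dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))    # minimize over evaluated points
    return gammas

# Toy profiles: an "interrupted" delivery compared against the "uninterrupted" one
x = np.linspace(-50, 50, 201)                         # positions in mm
ref = np.exp(-(x / 30.0) ** 2)                        # uninterrupted profile
meas = np.exp(-((x - 0.5) / 30.0) ** 2) * 1.01        # slightly shifted and scaled profile
g = gamma_1d(x, ref, x, meas)
print(f"mean gamma = {g.mean():.3f}, max gamma = {g.max():.3f}, "
      f"pass rate = {100 * np.mean(g <= 1):.1f}%")
```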

Construction of Artificial Intelligence Training Platform for Multi-Center Clinical Research (다기관 임상연구를 위한 인공지능 학습 플랫폼 구축)

  • Lee, Chung-Sub;Kim, Ji-Eon;No, Si-Hyeong;Kim, Tae-Hoon;Yoon, Kwon-Ha;Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.10
    • /
    • pp.239-246
    • /
    • 2020
  • In the medical field, where artificial intelligence technology is being introduced, research on clinical decision support systems (CDSS) for diagnosis and prediction is actively underway. In particular, AI technologies have been applied to various products in medical imaging-based disease diagnosis. However, medical imaging data are inconsistent, and in practice it takes considerable time to prepare them for research. This paper describes a one-stop AI learning platform that converts medical images to the standard R_CDM (Radiology Common Data Model) and supports AI algorithm development based on the resulting datasets. To this end, the focus is on linking with the existing CDM (common data model) and on modeling the system, including the schema of the medical imaging standard model and report information for multi-center research, based on DICOM (Digital Imaging and Communications in Medicine) tag information. We also show execution results based on datasets generated through the AI learning platform. The proposed platform is expected to be used for various image-based artificial intelligence studies.
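For the DICOM-tag-driven modeling mentioned above, the sketch below shows one plausible way to pull tag information with pydicom and flatten it into a row of a CDM-style table. The column names and tag mapping are assumptions for illustration; they are not the R_CDM schema defined in the paper.

```python
from pathlib import Path
import csv
import pydicom

FIELDS = {                       # CDM-like column -> DICOM keyword (assumed mapping)
    "person_source_value": "PatientID",
    "modality": "Modality",
    "study_date": "StudyDate",
    "series_uid": "SeriesInstanceUID",
    "body_part": "BodyPartExamined",
}

def dicom_to_row(path):
    """Read only the DICOM header and map selected tags to CDM-like columns."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {col: str(getattr(ds, kw, "")) for col, kw in FIELDS.items()}

def build_table(dicom_dir, out_csv):
    """Walk a directory of DICOM files and write one flattened row per file."""
    rows = [dicom_to_row(p) for p in Path(dicom_dir).rglob("*.dcm")]
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(FIELDS))
        writer.writeheader()
        writer.writerows(rows)

# build_table("incoming_dicom/", "radiology_cdm_like.csv")   # example usage
```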

Work Improvement by Computerizing the Process of Shielding Block Production (차폐블록 제작과정의 전산화를 통한 업무개선)

  • Kang, Dong Hyuk;Jeong, Do Hyeong;Kang, Dong Yoon;Jeon, Young Gung;Hwang, Jae Woong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.87-90
    • /
    • 2013
  • Purpose: The introduction of a CR (computed radiography) system created a process of printing therapy irradiation images and converting their degree of enlargement. The aim of this work is to simplify that process with home-grown software and thereby improve work efficiency. Materials and Methods: The software was written with Microsoft Excel (ver. 2007) and Visual Basic (ver. 6.0). A window for each shield block was designed for entering patients' treatment information. Distances on the digital images were measured, the measured data were entered into the Excel program to calculate the degree of enlargement, and printouts were produced for manufacturing the shield blocks (a sketch of the enlargement calculation follows this entry). Results: By computerizing the existing method with this program, the degree of enlargement can be calculated easily and patients' treatment information can be entered into the printouts using a macro function. As a result, calculation errors that may occur during block production, and errors in transferring treatment information, can be reduced. In addition, simplifying the conversion of the degree of enlargement removed the need for a copy machine, which reduced paper use. Conclusion: Work was improved by computerizing and simplifying the block production process and applying it in practice. This software can be adapted to the circumstances of individual hospitals in various ways using the many features of Excel and Visual Basic, which are already proven and widely used.

  • PDF
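The degree-of-enlargement arithmetic that the paper moves from manual work into Excel/Visual Basic is not spelled out in the abstract; the Python sketch below shows one plausible form of it, assuming the scale factor is taken as the ratio of a known reference length to its length measured on the digital image. The function names and numbers are illustrative only.

```python
def enlargement_factor(known_length_mm: float, measured_length_mm: float) -> float:
    """Scale factor converting lengths measured on the image to true lengths
    (assumption: a reference object of known size is visible in the image)."""
    return known_length_mm / measured_length_mm

def scale_outline(points_mm, factor):
    """Scale block outline coordinates (list of (x, y) in image mm) to true size."""
    return [(x * factor, y * factor) for x, y in points_mm]

# Example: a 100 mm marker measures 125 mm on the printed image -> factor = 0.8
f = enlargement_factor(100.0, 125.0)
outline = scale_outline([(0.0, 0.0), (62.5, 0.0), (62.5, 37.5)], f)
print(f, outline)   # 0.8, [(0.0, 0.0), (50.0, 0.0), (50.0, 30.0)]
```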

Evaluation of Corrected Dose with Inhomogeneous Tissue by using CT Image (CT 영상을 이용한 불균질 조직의 선량보정 평가)

  • Kim, Gha-Jung
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.18 no.2
    • /
    • pp.75-80
    • /
    • 2006
  • Purpose: In radiation therapy, precise calculation of the dose to malignant tumors and normal tissue is a critical factor in determining whether treatment will be successful. The radiation treatment planning (RTP) system corrects the dose for inhomogeneous tissue by converting CT numbers, via the linear attenuation coefficient, into tissue density in the CT-based reconstruction (a sketch of such a conversion follows this entry). Materials and Methods: In this study, CT numbers were measured and mass densities were calculated with the RTP system and a homemade inhomogeneous tissue phantom, with values referenced to water. We then investigated the effectiveness and accuracy of the inhomogeneity correction based on CT numbers by comparing the measured dose (nC) and the calculated dose (percentage depth dose, PDD) obtained from CT images in the RTP system during irradiation. Results: The difference in mass density between the calculated tissue-equivalent material and the true value ranged from 0.005 g/cm³ to 0.069 g/cm³. The relative error between the PDD calculated by the RTP system and the dose measured on the treatment machine ranged from -2.8% to +1.06% (within the acceptable range of 3%). Conclusion: We confirmed the effectiveness of the inhomogeneity correction based on CT images. These results provide useful basic information for quality assurance (QA) of the RTP system.

  • PDF
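The CT-number-to-density correction referred to above is commonly implemented as a piecewise-linear calibration curve. The sketch below shows that idea; the calibration points are generic illustrative values, not the phantom measurements reported in the paper.

```python
import numpy as np

# (CT number [HU], mass density [g/cm^3]) calibration pairs, referenced to water at 0 HU
CT_CALIBRATION = np.array([
    [-1000, 0.001],   # air
    [ -700, 0.30],    # lung
    [    0, 1.00],    # water
    [  200, 1.10],    # soft tissue
    [ 1000, 1.60],    # bone
])

def ct_to_density(hu):
    """Interpolate mass density for CT numbers using the calibration table."""
    return np.interp(hu, CT_CALIBRATION[:, 0], CT_CALIBRATION[:, 1])

print(ct_to_density([-750, -20, 350]))   # approx. lung, near-water, and bone-like densities
```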

A Study on the Measurements of Sub-surface Residual Stress in the Field of Linear Stress Gradient (선형구배 응력장에서 표층의 잔류응력 측정에 관한 연구)

  • 최병길;전상윤;이택순
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.16 no.9
    • /
    • pp.1632-1642
    • /
    • 1992
  • When a blind hole of small diameter is drilled in a residual stress field, the strain relieved around the hole is a function of the stress magnitude, the stress distribution pattern, and the hole geometry (diameter and depth). Relieved-strain coefficients can be calculated from a finite element analysis relating the relieved strain to the actual stress, and they make it possible to measure residual stresses that vary with depth below the surface of the stressed material. In this study, calibration tests of the residual stress measurement were carried out by drilling a hole incrementally in a cantilever beam and in a tensile test bar. Residual stresses can be determined from the strains measured around a shallow hole by applying the power-series method. For reliable residual stress measurement, relieved strains and hole depth must be measured more accurately than in conventional gauge procedures under external load; otherwise, the linear equations converting strains into stresses may yield erratic residual stresses because they are ill-conditioned. With accurate measurement of the relieved strains, residual stresses can be measured even when they vary with depth, and it is also possible to measure the residual stress in a thin film of material by drilling a shallow hole.
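The "linear equations converting strains into stresses" mentioned above can be pictured as a small least-squares problem: relieved strains measured at successive hole depths against calibration coefficients for the terms of a power-series stress profile. The sketch below illustrates that step with made-up coefficients; real coefficients come from the FEM relieved-strain analysis described in the abstract.

```python
import numpy as np

depths = np.array([0.1, 0.2, 0.3, 0.4, 0.5])          # hole depth increments [mm]
# Assumed calibration matrix: strain relieved per unit of the stress terms (a, b)
# at each depth step; in practice these columns come from FEM relieved-strain analysis.
A = np.column_stack([
    -2.0e-6 * depths,                                  # sensitivity to the uniform term a
    -1.5e-6 * depths**2,                               # sensitivity to the gradient term b
])

true_a, true_b = 120.0, -80.0                          # "true" profile sigma(z) = a + b*z [MPa]
strains = A @ np.array([true_a, true_b])
strains = strains + 1e-8 * np.random.default_rng(1).standard_normal(strains.size)  # noise

coeffs, *_ = np.linalg.lstsq(A, strains, rcond=None)   # ill-conditioning shows up here
print("recovered (a, b):", coeffs)
print("condition number of A:", np.linalg.cond(A))
```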

Development of Artificial Neural Network Model for Estimation of Cable Tension of Cable-Stayed Bridge (사장교 케이블의 장력 추정을 위한 인공신경망 모델 개발)

  • Kim, Ki-Jung;Park, Yoo-Sin;Park, Sung-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.3
    • /
    • pp.414-419
    • /
    • 2020
  • An artificial-intelligence-based cable tension estimation model was developed to expand the use of data obtained from the cable accelerometers of cable-stayed bridges. The model combines an artificial neural network (ANN) with an algorithm that selects the natural frequency used in the vibration-method tension estimation. The ANN training data were composed by converting the cable acceleration data to the frequency domain, and machine learning was carried out on the patterned characteristics of the natural frequencies. When building the training data, frequencies with various amplitudes can be used to represent spectra of multiple shapes, improving the selection performance for natural frequencies. The performance of the model was evaluated by comparing it against reference tensions estimated by an expert. In verification with 139 spectra obtained from the cable accelerometers as input, the selected natural frequencies were similar to the reference values, and the cable tension estimated from the natural frequency was 96.4% of the reference.
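Once a natural frequency has been selected (by the ANN in the paper), the vibration method converts it to tension; for a taut-string cable model the usual relation is T = 4 m L² (f_n / n)². The sketch below shows that conversion with illustrative cable properties, and naive peak picking stands in for the ANN-based frequency selection.

```python
import numpy as np

def tension_from_frequency(f_n: float, n: int, mass_per_length: float, length: float) -> float:
    """Cable tension [N] from the n-th natural frequency [Hz] (taut-string model)."""
    return 4.0 * mass_per_length * length**2 * (f_n / n) ** 2

# Example: pick the dominant spectral peak as the 1st-mode frequency, then estimate tension
freqs = np.fft.rfftfreq(4096, d=1.0 / 100.0)          # spectrum bins for 100 Hz sampling
spectrum = np.zeros_like(freqs)
spectrum[np.argmin(np.abs(freqs - 1.2))] = 1.0        # synthetic peak near 1.2 Hz
f1 = freqs[np.argmax(spectrum)]                       # naive peak picking (ANN stand-in)
print(tension_from_frequency(f1, n=1, mass_per_length=80.0, length=150.0) / 1e3, "kN")
```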

Arrhythmia Classification based on Binary Coding using QRS Feature Variability (QRS 특징점 변화에 따른 바이너리 코딩 기반의 부정맥 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.8
    • /
    • pp.1947-1954
    • /
    • 2013
  • Previous works on arrhythmia detection have mostly used nonlinear methods such as artificial neural networks, fuzzy theory, and support vector machines to increase classification accuracy. Most of these methods require accurate detection of the P-QRS-T points and have high computational cost and long processing time, yet the P and T waves are difficult to detect reliably because of individual differences between people. It is therefore necessary to design an efficient algorithm that classifies different arrhythmias in real time and reduces computational cost by extracting a minimal set of features. In this paper, we propose arrhythmia detection based on binary coding of QRS feature variability. For this purpose, the R wave, the RR interval, and the QRS width are detected from the denoised ECG signal through a preprocessing step. Arrhythmias are then classified in real time by converting the thresholded variability of these features into a binary code. Classification of PVC, PAC, normal, BBB, and paced beats was evaluated on 39 records of the MIT-BIH arrhythmia database, achieving average accuracies of 97.18%, 94.14%, 99.83%, 92.77%, and 97.48%, respectively.
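The binary-coding idea described above can be pictured as thresholding a few QRS features per beat and packing the results into a short code that indexes a class table. The sketch below illustrates that mechanism only; the thresholds, bit order, and code-to-class table are invented for the example and are not the paper's values.

```python
from dataclasses import dataclass

@dataclass
class Beat:
    r_amplitude: float   # normalized R-peak amplitude
    rr_interval: float   # seconds since the previous R peak
    qrs_width: float     # seconds

THRESHOLDS = {"r_amplitude": 0.8, "rr_interval": 0.7, "qrs_width": 0.12}

# Assumed mapping from 3-bit codes to beat classes (illustrative only)
CODE_TABLE = {
    0b000: "PVC", 0b001: "PAC", 0b010: "BBB", 0b011: "Paced",
    0b100: "Normal", 0b101: "Normal", 0b110: "PAC", 0b111: "Normal",
}

def encode(beat: Beat) -> int:
    """Threshold each feature and pack the comparisons into a 3-bit code."""
    bits = (
        beat.r_amplitude >= THRESHOLDS["r_amplitude"],
        beat.rr_interval >= THRESHOLDS["rr_interval"],
        beat.qrs_width   <  THRESHOLDS["qrs_width"],
    )
    return sum(bit << i for i, bit in enumerate(bits))

def classify(beat: Beat) -> str:
    return CODE_TABLE[encode(beat)]

print(classify(Beat(r_amplitude=1.0, rr_interval=0.82, qrs_width=0.09)))   # -> "Normal"
```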

Effects of particle size and loading rate on the tensile failure of asphalt specimens based on a direct tensile test and particle flow code simulation

  • Q. Wang;D.C. Wang;J.W. Fu;Vahab Sarfarazi;Hadi Haeri;C.L. Guo;L.J. Sun;Mohammad Fatehi Marji
    • Structural Engineering and Mechanics
    • /
    • v.86 no.5
    • /
    • pp.607-619
    • /
    • 2023
  • In this study, the behavior of asphalt under tensile loading was evaluated experimentally and numerically through indirect Brazilian and direct tensile tests. The paper is important from two points of view: first, a new test method was developed for determining the direct tensile strength of asphalt and its difference from the indirect test method was obtained; second, the effects of particle size and loading rate on the tensile fracture mechanism were clarified. The experimental direct tensile strength of the asphalt specimens was measured in the laboratory using a compression-to-tensile load converting (CTLC) device. Special asphalt specimens were prepared in the form of slabs with a central hole. The CTLC device is fitted with such a specimen and placed in a universal testing machine, so the direct tensile strength of asphalt specimens with different ingredient sizes can be measured at different loading rates. The particle flow code (PFC), a numerical modeling technique based on the versatile discrete element method (DEM), was used to simulate the direct tensile strength test. Three particle diameters were tested under three loading rates. The results show that at a loading rate of 0.016 mm/s, two tensile cracks initiated from the left and right of the hole and propagated perpendicular to the loading axis until they coalesced with the model boundary. At 0.032 mm/s, two tensile cracks again initiated from the left and right of the hole and propagated perpendicular to the loading axis, with branching occurring in these cracks, which indicates quasi-static crack propagation. At 0.064 mm/s, mixed tensile and shear cracks initiated below the loading walls and branching occurred in these cracks, which indicates dynamic crack propagation. As the loading rate increases, the tensile strength increases, because at a low loading rate all defects are mobilized, which decreases the tensile strength. The experimental direct tensile strengths of asphalt specimens with different ingredients were in good agreement with the corresponding results approximated by the DEM software.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (convolutional neural networks), known as an effective solution for recognizing and classifying images and voices, have been widely applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving; specifically, we propose to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Since CNNs are strong at interpreting images, the proposed model adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) from time series graphs used as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. The proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2; each graph is drawn as a 40 × 40 pixel image, with each independent variable plotted in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation sets; 80% of the total dataset was used for training and the remaining 20% for validation. The CNN classifiers are then trained on the training images in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer and a 2 × 2 max-pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the output layer has 2 nodes (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (rectified linear unit), and for the output layer it was the softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling; 80% of these (1,560 samples) formed the training dataset and 20% (390 samples) the validation dataset. The independent variables of the experimental dataset were twelve technical indicators popular in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and others. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. This paper thus sheds light on how to apply deep learning techniques to the domain of business problem solving.
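A minimal PyTorch sketch of a network along the lines described for CNN-FG is given below (40 × 40 RGB inputs, 5 × 5 convolutions with 6 and 9 filters, 2 × 2 max pooling, fully connected layers of 900 and 32 nodes, and a 2-node softmax output). Unstated details such as padding and strides are assumptions, so the exact layer sizes may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class CNNFGSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 40 -> 36 -> 18
            nn.Conv2d(6, 9, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 18 -> 14 -> 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(9 * 7 * 7, 900), nn.ReLU(),
            nn.Linear(900, 32), nn.ReLU(),
            nn.Linear(32, 2),            # logits; softmax is applied via the loss below
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CNNFGSketch()
images = torch.randn(8, 3, 40, 40)       # a batch of fluctuation-graph images (dummy data)
labels = torch.randint(0, 2, (8,))       # 0 = downward, 1 = upward (dummy labels)
loss = nn.CrossEntropyLoss()(model(images), labels)   # cross-entropy = softmax + NLL
loss.backward()
print(model(images).softmax(dim=1)[:2])  # predicted up/down probabilities for two samples
```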