• Title/Summary/Keyword: Dimensional accuracy

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program developed by Google DeepMind, won a decisive victory against Lee Sedol. Many people believed a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn attention. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good results. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with MLP models, the traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate how well the models classify the class of interest, rather than overall accuracy. The methods for applying each deep learning technique are as follows. The CNN algorithm extracts features by reading adjacent values around a specific value, but because business data fields are usually independent, the distance between fields does not matter. In this experiment, we therefore set the CNN filter size to the number of fields so the model learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of the position of each field.
For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. From the experiments, we obtained the following findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models. This is interesting because CNN performed well on a binary classification problem, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
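
As a rough sketch of the CNN-with-dropout configuration the abstract describes (filter width equal to the number of input fields, dropout probability 0.5, F1 evaluation), something like the following could be used; the layer sizes, optimizer, and data here are illustrative assumptions, not the authors' exact design:

```python
# Hypothetical sketch: 1-D CNN whose filter spans all fields at once,
# with dropout p = 0.5 and F1-based evaluation. Not the paper's exact model.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score

n_fields = 16                                  # number of input variables (assumed)
X_train = np.random.rand(1000, n_fields, 1)    # placeholder data
y_train = np.random.randint(0, 2, 1000)

model = keras.Sequential([
    # One filter pass over the whole record, as the abstract describes.
    layers.Conv1D(filters=32, kernel_size=n_fields,
                  activation="relu", input_shape=(n_fields, 1)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),       # extra hidden layer for decisions
    layers.Dropout(0.5),                       # dropout probability 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=5, verbose=0)

# Evaluate with F1 rather than overall accuracy.
y_pred = (model.predict(X_train) > 0.5).astype(int).ravel()
print("F1:", f1_score(y_train, y_pred))
```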

The Impact of Bladder Volume on Acute Urinary Toxicity during Radiation Therapy for Prostate Cancer (전립선암의 방사선치료시 방광 부피가 비뇨기계 부작용에 미치는 영향)

  • Lee, Ji-Hae;Suh, Hyun-Suk;Lee, Kyung-Ja;Lee, Re-Na;Kim, Myung-Soo
    • Radiation Oncology Journal / v.26 no.4 / pp.237-246 / 2008
  • Purpose: Three-dimensional conformal radiation therapy (3DCRT) and intensity-modulated radiation therapy (IMRT) have been found to reduce the incidence of acute and late rectal toxicity compared with conventional radiation therapy (RT), although acute and late urinary toxicities were not reduced significantly. Acute urinary toxicity, even at a low grade, not only affects a patient's quality of life but can also serve as a predictor of chronic urinary toxicity. With bladder filling, part of the bladder moves away from the radiation field, resulting in a smaller irradiated bladder volume; hence, urinary toxicity can be decreased. The purpose of this study is to evaluate the impact of bladder volume on acute urinary toxicity during RT in patients with prostate cancer. Materials and Methods: Forty-two patients diagnosed with prostate cancer were treated with 3DCRT. Of these, 21 patients made up a control group treated without any instruction to control bladder volume; the remaining 21 patients in the experimental group were treated with a full bladder after drinking 450 mL of water an hour before treatment. We measured bladder volume by CT and ultrasound at simulation to validate the accuracy of ultrasound. During the treatment period, we measured bladder volume weekly by ultrasound in the experimental group to evaluate its variation. Results: A significant correlation between the bladder volumes measured by CT and by ultrasound was observed. Bladder volume in the experimental group varied between patients despite drinking the same amount of water. Although weekly variations in bladder volume were very high, larger initial CT volumes were associated with larger mean weekly bladder volumes. The mean bladder volume was 299±155 mL in the experimental group, as opposed to 187±155 mL in the control group. Patients in the experimental group experienced fewer acute urinary toxicities than those in the control group, but the difference was not statistically significant. A trend of reduced toxicity was observed with increasing CT bladder volume. In patients with bladder volumes greater than 150 mL at simulation, toxicity rates of all grades were significantly lower than in patients with bladder volumes less than 150 mL. Patients with a mean bladder volume larger than 100 mL during treatment also showed a slightly reduced Grade 1 urinary toxicity rate compared to patients with a mean bladder volume smaller than 100 mL. Conclusion: Despite the large variability in bladder volume during the treatment period, treating patients with a full bladder reduced acute urinary toxicities in patients with prostate cancer. We recommend that patients with prostate cancer undergo treatment with a full bladder.

Usefulness of Gated RapidArc Radiation Therapy Patient evaluation and applied with the Amplitude mode (호흡 동조 체적 세기조절 회전 방사선치료의 유용성 평가와 진폭모드를 이용한 환자적용)

  • Kim, Sung Ki;Lim, Hyun Sil;Kim, Wan Sun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.1 / pp.29-35 / 2014
  • Purpose: Gated RapidArc equipment, which allows respiratory-gated radiation therapy to be performed simultaneously with VMAT, has recently become commercially available, which was not previously possible. This study analyzes the accuracy of Gated RapidArc radiation therapy, evaluates its usefulness, and applies the amplitude mode to patients. Materials and Methods: For dose distribution analysis, a solid water phantom of tissue-equivalent quality and GafChromic film were used with the Film QA analysis program, applying a gamma criterion of 3%/3 mm. Matrixx dosimetry equipment and the Compass dose analysis program were used to verify the three-dimensional dose distribution. Periodic respiratory signals were generated with a solid phantom on a 4D phantom and the Varian RPM respiratory gating system, and the dose distribution was analyzed for free breathing and breath-hold. From February 2013 to August 2013, four patients with liver cancer targets were enrolled. 4DCT images covering the full respiratory cycle were acquired while each patient, wearing video goggles displaying the respiratory pattern, practiced following his or her own breathing cycle exactly in phase mode. For Gated RapidArc treatment, an amplitude-mode breathing cycle was created: the patient breathed three times and then held the breath for 5-6 seconds within the 40-60% phase interval, during which the Beam On button was pressed, in a semi-automatic manner. Results: The absolute doses calculated by the computerized treatment plan for the non-gated and gated volumetric modulated arc therapy techniques differed by less than 1%, and the difference between treatment techniques was also less than 1%. Gamma analysis (3%/3 mm) showed 99% agreement, and organ-specific dose differences generally showed more than 95% agreement. The amplitude-mode breathing cycle created for gated volumetric modulated arc therapy agreed well with the actual patient breathing cycle. Conclusion: The absolute dose and dose distribution of non-gated and gated volumetric modulated arc therapy showed very good agreement. This gated volumetric modulated arc therapy technique is considered applicable to the treatment of tumors that move with thoracic or abdominal respiration. By creating an amplitude-mode breathing cycle through the goggles, Gated RapidArc treatment could be applied even on equipment without automatic gating, with breath-holds of about 5-6 seconds complementing respiratory-gated volumetric modulated arc therapy.
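
The 3%/3 mm gamma criterion used above combines a dose-difference test with a distance-to-agreement test. A minimal 1-D sketch of a global gamma pass-rate computation, on toy profiles rather than the study's film data:

```python
# Minimal 1-D gamma (3%, 3 mm) sketch. Grid spacing and profiles are
# illustrative assumptions, not the clinical measurements from the study.
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global dose criterion)."""
    x = np.arange(len(dose_ref)) * spacing_mm
    d_norm = dd * dose_ref.max()          # global 3% dose-difference criterion
    gammas = []
    for xi, di in zip(x, dose_ref):
        # gamma = min over evaluated points of the combined distance metric
        g2 = ((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return (np.array(gammas) <= 1.0).mean()

ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)     # toy dose profile
meas = ref + np.random.normal(0, 0.005, ref.shape)     # toy measurement
print(f"gamma pass rate: {gamma_pass_rate(ref, meas, spacing_mm=1.0):.1%}")
```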

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many studies in academia have addressed predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online business, companies carry out campaigns of various types at a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of the campaign itself is also decreasing, as investment costs rise while actual campaign success rates stay low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of campaigns by collecting and analyzing customer-related data and using them for campaigns, and recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because of the many features in campaign data. If all input data are used when classifying a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used traditional feature selection techniques, but when there are many features they suffer from poor classification prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method when searching for the feature subsets that underlie machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features that have a negative effect are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithms: compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, the improved feature selection algorithm was found to help analyze and interpret the prediction results by providing the importance of the derived features.
The derived important features include age, customer rating, and sales amount, which were already known to be statistically important. Unexpectedly, features such as the combined product name, the average data consumption rate over three months, and wireless data usage over the last three months, which campaign planners had rarely used to select campaign targets, were also selected as important features for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
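
For reference, a compact sketch of plain SFFS, the baseline the study improves on; the classifier, scorer, and synthetic data are placeholders, and the paper's statistical pre-filtering step is not reproduced here:

```python
# Sketch of sequential floating forward selection (SFFS): a forward add step
# followed by conditional backward drops whenever dropping improves the score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

def score(X, y, feats):
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, feats], y, cv=3, scoring="f1").mean()

def sffs(X, y, k):
    selected, best = [], -np.inf
    remaining = list(range(X.shape[1]))
    while len(selected) < k:
        # forward step: add the single best remaining feature
        s, f = max((score(X, y, selected + [f]), f) for f in remaining)
        selected.append(f); remaining.remove(f); best = s
        # floating backward step: drop a feature if that strictly improves
        improved = True
        while improved and len(selected) > 1:
            improved = False
            for g in list(selected):
                trial = [f2 for f2 in selected if f2 != g]
                if score(X, y, trial) > best:
                    selected = trial; remaining.append(g)
                    best = score(X, y, trial); improved = True
                    break
    return selected

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
print("selected features:", sffs(X, y, k=5))
```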

A Study For Optimizing Input Waveforms In Radiofrequency Liver Tumor Ablation Using Finite Element Analysis (유한 요소 해석을 이용한 고주파 간 종양 절제술의 입력 파형 최적화를 위한 연구)

  • Lim, Do-Hyung;NamGung, Bum-Seok;Lee, Tae-Woo;Choi, Jin-Seung;Tack, Gye-Rae;Kim, Han-Sung
    • Journal of Biomedical Engineering Research / v.28 no.2 / pp.235-243 / 2007
  • Hepatocellular carcinoma is a significant worldwide public health problem, with an estimated annual mortality of 1,000,000 people. Radiofrequency (RF) ablation is an interventional technique that in recent years has come to be used for treatment of hepatocellular carcinoma by destroying tumor tissue at high temperatures. Numerous studies have attempted to prove the excellence of RF ablation and to improve its efficiency by various methods; however, these attempts sometimes conflict with the minimally invasive character and operative simplicity that are RF ablation's advantages. The aim of the current study is therefore to suggest an improved RF ablation technique by identifying an optimal RF pattern, one of the important factors capable of controlling the extent of the high-temperature region without losing the advantages of RF ablation. A three-dimensional finite element (FE) model was developed and validated against results reported in the literature. Four representative RF patterns (sine, square, exponential, and simulated RF waves), corresponding to the currents fed during simulated RF ablation, were investigated. The following parameters were analyzed for each RF pattern to identify which is optimal for effectively eliminating tumor tissue: 1) maximum temperature; 2) the degree of change of the maximum temperature over a fixed time range (30-40 seconds); 3) the domain above the 47°C isothermal temperature (IT); and 4) the domain with over 63% cell damage. Here, heat transfer within the tissue was governed by the bioheat equation. The developed FE model showed approximately 90-95% accuracy in predicting the maximum temperature and the domains of interest achieved during RF ablation. Maximum temperatures for the sine, square, exponential, and simulated RF waves were 69.0°C, 66.9°C, 65.4°C, and 51.8°C, respectively. While the maximum temperatures decreased over the fixed time range, the average time intervals for the sine, square, exponential, and simulated RF waves were 0.49±0.14, 1.00±0.00, 1.65±0.02, and 1.66±0.02 seconds, respectively. The average decreases in maximum temperature over the time range were 0.45±0.15°C for the sine wave, 1.93±0.02°C for the square wave, 2.94±0.05°C for the exponential wave, and 1.53±0.06°C for the simulated RF wave. The volumes of the temperature domain above the 47°C IT for the sine, square, exponential, and simulated RF waves were 1,480 mm³, 1,440 mm³, 1,380 mm³, and 395 mm³, respectively, and the volumes with over 63% cell damage were 114 mm³, 62 mm³, 17 mm³, and 0 mm³, respectively. These results support that applying a sine wave during RF ablation may generally be optimal for effectively destroying tumor tissue, compared with the other RF patterns.
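
The "bioheat equation" cited above is presumably the standard Pennes formulation, and the 63% cell-damage criterion matches an Arrhenius damage integral of Ω = 1 (damaged fraction 1 − e^(−Ω) ≈ 63.2%). A sketch of both, under that assumption:

```latex
% Pennes bioheat equation (assumed form) with an RF heat source Q_{RF},
% and the Arrhenius thermal damage integral:
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right) + Q_{RF},
\qquad
\Omega(t) = \int_0^t A \, e^{-E_a / \left( R \, T(\tau) \right)} \, d\tau
```

Here ρ, c, and k are the tissue density, specific heat, and thermal conductivity; the b-subscripted terms model blood perfusion toward arterial temperature T_a; and A and E_a are tissue-specific Arrhenius constants. Ω(t) = 1 then corresponds to the 63% cell-damage domain reported above.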

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.253-263 / 2013
  • The Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measuring methods because its hardware remains fixed during operation and only the light frequency is scanned within a specific spectral band, without vertical scanning of the target surface or the objective lens. The FSI system collects a set of interference fringe images while changing the frequency of the light source, transforms the intensity data of the acquired images into frequency information, and calculates the height profile of target objects through frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and relatively long processing time due to the number of images acquired in the frequency scanning phase. In this work, 1) a Polarization-based Frequency Scanning Interferometry (PFSI) is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter, a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the polarizer of the light source. With the proposed system, the problem of low-contrast fringe images is solved by the polarization technique, and the light distribution of the object beam and reference beam can be controlled. 2) A signal processing acceleration method is proposed for the PFSI, based on a parallel processing architecture consisting of parallel hardware and software such as a Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the takt-time level of real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
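
The FFT-based height recovery step works per pixel: the intensity recorded over the frequency scan forms a fringe whose frequency is proportional to the optical path difference. A minimal single-pixel sketch, with scan parameters chosen purely for illustration:

```python
# Single-pixel FSI height recovery sketch. Scan step, step count, and the
# target height are assumed values, not the paper's experimental settings.
import numpy as np

c = 3e8                                   # speed of light, m/s
n_steps = 256                             # images acquired in the scan (assumed)
df = 20e9                                 # frequency step per image, Hz (assumed)
height_true = 150e-6                      # toy target height (OPD / 2), m

# Simulated per-pixel intensity over the scan: I(k) ~ cos(4*pi*h*f_k / c)
k = np.arange(n_steps)
intensity = 1 + np.cos(4 * np.pi * height_true * (k * df) / c)

# FFT over the scan axis; the peak bin gives the fringe frequency.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
peak_bin = spectrum.argmax()
# One fringe cycle over the full scan corresponds to OPD = c / (n_steps * df).
height_est = peak_bin * c / (2 * n_steps * df)
print(f"estimated height: {height_est * 1e6:.1f} um")
```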

Upper Body Surface Change Analysis using 3-D Body Scanner (3차원 인체 측정기를 이용한 체표변화 분석)

  • Lee, Jeongran;Ashdown, Susan P.
    • Journal of the Korean Society of Clothing and Textiles / v.29 no.12 s.148 / pp.1595-1607 / 2005
  • Three-dimensional (3-D) body scanners used to capture anthropometric measurements are now becoming a common research tool for apparel. This study had two goals: to test the accuracy and reliability of 3-D measurements of dynamic postures, and to analyze the change in upper body surface measurements between the standard anthropometric position and various dynamic positions. Differences in body surface measurements between two measuring methods, 3-D scan measurements using virtual tools on the computer screen and traditional manual measurements, ranged from −2 to 20 mm for a standard anthropometric posture and for a posture with shoulder flexion. Girth items showed some disagreement between the two methods. None of the measurements were significantly different, except for the neckbase girth, for any of the measuring methods or postures. Scan measurements of the upper body items showed significant linear surface change in the dynamic postures. Shoulder length, interscye front and back, and biacromion length were the items most affected by the dynamic postures. Changes of the linear body surface were very similar for the two measuring methods within the same posture. The repeatability of data taken from the 3-D scans using virtual tools showed satisfactory results: scan measurements repeated three times for the scapula protraction and scapula elevation postures were proven to be statistically the same for all measurement items. Measurements from automatic measuring software, which measured the 3-D scan with no manual intervention, were compared with the measurements using virtual tools; many measurements from the automatic program were larger and showed quite different values.

A Comparative Analysis between Photogrammetric and Auto Tracking Total Station Techniques for Determining UAV Positions (무인항공기의 위치 결정을 위한 사진 측량 기법과 오토 트래킹 토탈스테이션 기법의 비교 분석)

  • Kim, Won Jin;Kim, Chang Jae;Cho, Yeon Ju;Kim, Ji Sun;Kim, Hee Jeong;Lee, Dong Hoon;Lee, On Yu;Meng, Ju Pil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.553-562 / 2017
  • The GPS (Global Positioning System) receiver, among the various sensors mounted on a UAV (Unmanned Aerial Vehicle), supports functions such as hovering flight and waypoint flight based on GPS signals, and can be used wherever GPS signals are received reliably. Recently, however, the use of UAVs has been diversifying into fields such as facility monitoring, delivery services, and leisure as their application areas have expanded. As a result, GPS signals may be interrupted when a UAV flies in a shadow area where signal reception is limited, and multipath can introduce various noises into the signal when flying in dense areas such as among high-rise buildings. In this study, we used analytical photogrammetry and an auto-tracking total station technique for 3D positioning of a UAV. The analytical photogrammetry is based on bundle adjustment using the collinearity equations, the geometric principle of central projection. The auto-tracking total station technique is based on tracking a 360-degree prism target at intervals of a second or less. In both techniques, the targets used for positioning are mounted on top of the UAV, with a geometric separation between the targets in the x, y, and z directions. Data were acquired at speeds of 0.86 m/s, 1.5 m/s, and 2.4 m/s to examine the effect of the UAV's flight speed, and accuracy was evaluated using the geometric separation of the targets. As a result, errors from 1 mm to 12.9 cm occurred in the x and y directions of the UAV flight. In the z direction, where movement was relatively small, an error of approximately 7 cm occurred regardless of flight speed.
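
The collinearity condition named above projects a ground point into image coordinates through the camera's exterior orientation. A minimal sketch, with an assumed focal length and a level camera rather than the study's actual calibration:

```python
# Collinearity-equation sketch: project a ground point into the image plane.
# Focal length, rotation, and coordinates are illustrative assumptions.
import numpy as np

def collinearity_project(X, cam_pos, R, f):
    """Project ground point X (3,) to image coordinates (x, y)."""
    d = R @ (X - cam_pos)              # point in camera coordinates
    # sign convention assumes the camera axis points along -Z
    x = -f * d[0] / d[2]               # collinearity equations
    y = -f * d[1] / d[2]
    return x, y

f = 0.02                               # 20 mm focal length (assumed)
cam_pos = np.array([0.0, 0.0, 50.0])   # camera 50 m above the origin
R = np.eye(3)                          # level camera, zero rotation (assumed)
print(collinearity_project(np.array([5.0, 3.0, 0.0]), cam_pos, R, f))
```

In the bundle adjustment, many such projections from multiple images are combined and the exterior orientations and the target position are solved simultaneously by least squares.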

Performance Evaluation of Machine Learning and Deep Learning Algorithms in Crop Classification: Impact of Hyper-parameters and Training Sample Size (작물분류에서 기계학습 및 딥러닝 알고리즘의 분류 성능 평가: 하이퍼파라미터와 훈련자료 크기의 영향 분석)

  • Kim, Yeseul;Kwak, Geun-Ho;Lee, Kyung-Do;Na, Sang-Il;Park, Chan-Won;Park, No-Wook
    • Korean Journal of Remote Sensing / v.34 no.5 / pp.811-827 / 2018
  • The purpose of this study is to compare machine learning and deep learning algorithms for crop classification using multi-temporal remote sensing data. To this end, the impacts of (1) hyper-parameters and (2) training sample size on machine learning and deep learning algorithms were compared and analyzed for Haenam-gun, Korea and Illinois State, USA. In the comparison experiment, a support vector machine (SVM) was applied as the machine learning algorithm and a convolutional neural network (CNN) as the deep learning algorithm. In particular, a 2D-CNN considering 2-dimensional spatial information and a 3D-CNN extending the 2D-CNN with a time dimension were applied. The experiment showed that the optimal CNN hyper-parameter values, searched over various hyper-parameters, were similar in the two study areas, in contrast to the SVM. Based on this result, although optimizing a CNN model takes much time, transfer learning that extends an optimized CNN model to other regions appears feasible. In the experiments with various training sample sizes, the impact of sample size on the CNNs was larger than on the SVM, and this impact was exaggerated in Illinois State, which has heterogeneous spatial patterns. In addition, the 3D-CNN showed the lowest classification performance in Illinois State, which is considered to be due to over-fitting caused by the complexity of the model. That is, although the training accuracy of the 3D-CNN model was high, its classification performance was relatively degraded by the heterogeneous patterns and the noise in the input data. This result implies that a proper classification algorithm should be selected considering the spatial characteristics of the study area, and that a large number of training samples is necessary to guarantee high classification performance with CNNs, particularly with the 3D-CNN.
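
A sketch of the training-sample-size experiment using only the SVM baseline (the 2D/3D-CNN models and the remote sensing imagery are omitted; the data here are synthetic placeholders):

```python
# Learning-curve sketch: accuracy vs. training sample size for an SVM.
# Dataset, class count, and sample sizes are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=30,
                           n_classes=4, n_informative=10, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

for n in (50, 200, 1000, 3000):           # increasing training sample sizes
    idx = np.random.RandomState(0).choice(len(X_pool), n, replace=False)
    clf = SVC(C=10, gamma="scale").fit(X_pool[idx], y_pool[idx])
    print(f"n={n:5d}  overall accuracy={clf.score(X_test, y_test):.3f}")
```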

Development of an anisotropic spatial interpolation method for velocity in meandering river channel (비등방성을 고려한 사행하천의 유속 공간보간기법 개발)

  • You, Hojun;Kim, Dongsu
    • Journal of Korea Water Resources Association / v.50 no.7 / pp.455-465 / 2017
  • Understanding the two-dimensional velocity field is crucial for analyzing various hydrodynamic and fluvial processes in riverine environments. Until recently, numerical models played the major role of providing such velocity fields instead of in-situ flow measurements, because instruments and methodologies suitable for efficiently measuring broad river reaches were limited. In the last decades, however, the advent of modernized instrumentation began to revolutionize flow measurement. Among others, acoustic Doppler current profilers (ADCPs) became very promising, especially for accurately assessing streamflow discharge, and they can also provide detailed velocity fields very efficiently, making it possible to capture the velocity field with field observations alone. Since most ADCP measurements are conducted along cross-sectional lines despite these capabilities, appropriate interpolation methods are still required to obtain velocity fields as dense as those from numerical simulations. However, the anisotropic nature of a meandering river channel makes simple spatial interpolation of the velocity vector difficult, since the flow direction changes continuously with the curvature of the channel. Without considering this anisotropy, conventional interpolation methods such as IDW and kriging can therefore produce erroneous results when applied to velocity vectors in a meandering channel. In this study, an anisotropic velocity interpolation method (A-VIM) was developed based on consecutive ADCP cross-sectional measurements in a meandering river channel. For this purpose, the geographic coordinates of the measured ADCP velocities were converted from the conventional Cartesian coordinates (x, y) to curvilinear coordinates (s, n). Application of the A-VIM showed a significant improvement in accuracy, as much as 41.5% in RMSE.
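
The core of the coordinate conversion above is mapping each measurement from Cartesian (x, y) to channel-fitted (s, n) coordinates, i.e., arc length along the centerline and signed offset from it. A toy sketch with a synthetic centerline (interpolation such as IDW would then operate in (s, n) space rather than (x, y)):

```python
# (x, y) -> (s, n) conversion sketch along a toy meandering centerline.
# The centerline curve and the test point are illustrative assumptions.
import numpy as np

# Toy meandering centerline, sampled densely
t = np.linspace(0, 4 * np.pi, 2000)
cx, cy = t, np.sin(t)                            # centerline coordinates
ds = np.hypot(np.diff(cx), np.diff(cy))
s_along = np.concatenate([[0], np.cumsum(ds)])   # arc length along centerline

def to_sn(px, py):
    """Map point (px, py) to (s, n) via the nearest centerline vertex."""
    d2 = (cx - px) ** 2 + (cy - py) ** 2
    i = d2.argmin()
    # sign of n from the cross product of the local tangent and offset vector
    j = min(i, len(cx) - 2)
    tx, ty = cx[j + 1] - cx[j], cy[j + 1] - cy[j]
    side = np.sign(tx * (py - cy[i]) - ty * (px - cx[i]))
    return s_along[i], side * np.sqrt(d2[i])

print(to_sn(6.0, 1.2))   # (distance along channel, signed lateral offset)
```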