• Title/Summary/Keyword: Binary Method


Quality Assurance of Volumetric Modulated Arc Therapy Using the Dynalog Files (다이나로그 파일을 이용한 부피세기조절회전치료의 정도관리)

  • Kang, Dong-Jin;Jung, Jae-Yong;Shin, Young-Joo;Min, Jung-Whan;Kim, Yon-Lae;Yang, Hyung-jin
    • Journal of radiological science and technology / v.39 no.4 / pp.577-585 / 2016
  • The purpose of this study is to evaluate the accuracy of beam-delivery QA software that uses the MLC dynalog file, for VMAT plans based on the AAPM TG-119 protocol. A Clinac iX with a built-in 120-leaf MLC was used to acquire the MLC dynalog files, which were imported into MobiusFx (MFX). VMAT plans were created with the Oncentra RTP system, and target and organ structures were contoured in the Im'RT phantom. Dose distributions were evaluated using the gamma index, and point doses were measured with a CC13 ion chamber in the Im'RT phantom. For the point-dose evaluation, the mean relative error between measured and calculated values was 1.41 ± 0.92% (target) and 0.89 ± 0.86% (OAR), and the confidence limits were 3.21 (96.79%, target) and 2.58 (97.42%, OAR). For the dose-distribution evaluation, the average gamma passing rates with Delta4PT were 99.78 ± 0.2% (3%/3 mm) and 96.86 ± 1.76% (2%/2 mm); with MFX, they were 99.90 ± 0.14% (3%/3 mm) and 97.98 ± 1.97% (2%/2 mm). The confidence limits (CL) were 0.62 (99.38%, 3%/3 mm) and 6.6 (93.4%, 2%/2 mm) for Delta4PT, and 0.38 (99.62%, 3%/3 mm) and 5.88 (94.12%, 2%/2 mm) for MFX. In this study, a VMAT QA method using the dynamic MLC log file was compared against a binary diode array detector. All analyzed results satisfied the acceptance criteria of the TG-119 protocol.
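The point-dose statistics quoted above follow the TG-119 convention, in which the confidence limit is |mean| + 1.96·SD of the per-plan relative errors. A minimal sketch of that arithmetic (the dose arrays are hypothetical placeholders, not data from the paper):

```python
import numpy as np

# Hypothetical measured/calculated point doses (Gy) for a set of TG-119 test plans.
measured = np.array([2.02, 1.98, 2.05, 1.99])
calculated = np.array([2.00, 2.00, 2.00, 2.00])

# Per-plan relative error in percent.
rel_err = 100.0 * (measured - calculated) / calculated

# TG-119 confidence limit: |mean| + 1.96 * standard deviation.
cl = abs(rel_err.mean()) + 1.96 * rel_err.std(ddof=1)
print(f"mean = {rel_err.mean():.2f}%, SD = {rel_err.std(ddof=1):.2f}%, CL = {cl:.2f}%")
```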

A Study of Experimental Image Direction for Short Animation Movies -focusing on the short films <Tango> and <Fast Film>- (단편애니메이션의 실험적 영상연출 연구 -<탱고>와 <페스트 필름>을 중심으로)

  • Choi, Don-Ill
    • Cartoon and Animation Studies / s.36 / pp.375-391 / 2014
  • An animation movie is a non-photorealistic animated art form in which a formative language builds each frame based on a story, and cuts are assembled from those frames. Therefore, in expressing an image, artistic expression methods and devices for a formative space should be provided within each frame, while the cuts faithfully carry the images between frames. Short animation movies are produced through various image experiments with unique image expressions, rather than through narration, to express the subjective discourse of the writer. The image style that forms unique images and the various image directions are therefore important factors. This study compared the experimental image direction of <Tango> and <Fast Film>, both of which use film manipulation as a production method. First, while <Tango> uses pixilation, producing its images from live-action footage through painting and repeated optical exposure on cel mats, <Fast Film> was made with diverse collage techniques such as tearing, cutting, pasting, and folding hundreds of scenes from action movies. Second, <Tango> expresses the non-causal relationships of its characters through their repetitive behaviors and a circulatory image structure seen from a fixed camera angle, resisting typical scene transitions. On the other hand, <Fast Film> has an advancing structure that develops the antagonistic relationship of its characters through diverse camera angles and unique scene transitions. Third, in terms of editing, <Tango> uses a long-take technique in which the whole film consists of a single shot, though the appearance of various characters makes it seem to be many scenes. <Fast Film>, by contrast, maximizes visual fun and engagement by reconstructing the image from hundreds of varied short cuts. That is, both works share the character of experimental works that expand animated image expression through film manipulation, differing from general animation production. Beyond that, <Tango> delivers the routine life of diverse human beings, without explicit narration, through images of a conceptualized space, while <Fast Film> expresses it in a new image space through image reconstruction with collage techniques and speedy progression, set on a binary opposition structure.

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it becomes more important to handle these malicious attacks and hacks appropriately, and there is sufficient interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. Although they perform very well under normal conditions, they cannot handle new or unknown patterns of network attacks. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various kinds of artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect intrusions on the network. However, most studies have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four different binary classification models, logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may be complementary to each other. Genetic algorithms (GA) are used as the tool for finding the optimal combining weights. The proposed model is built in two steps. In the first step, the optimal integration model, the one whose prediction error (i.e., erroneous classification rate) is lowest, is generated. In the second step, it explores the optimal classification threshold for determining intrusions, the one that minimizes the total misclassification cost. To calculate the total misclassification cost of an intrusion detection system, we need to understand its asymmetric error cost scheme. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is wrongly judged to be an intrusion, which may result in unnecessary responses. The second is the False-Negative Error (FNE), which misjudges malicious activity as normal. Compared to FPE, FNE is more fatal, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results from our model with those from the single techniques to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN using NeuroShell R4.0. For SVM, LIBSVM v2.90, a freeware tool for training SVM classifiers, was used. Empirical results showed that our proposed GA-based model outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. It also outperformed all the other comparative models from the total misclassification cost perspective. Consequently, our study may contribute to building cost-effective intelligent intrusion detection systems.
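As a rough illustration of the two-step design described above, the sketch below uses a simple genetic algorithm to find combining weights for four base classifiers' predicted probabilities and then scans for the threshold that minimizes an asymmetric misclassification cost. The base-model probabilities, cost values, and GA settings are all hypothetical placeholders, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted intrusion probabilities from four base models
# (LOGIT, DT, ANN, SVM) for 1,000 samples, plus true labels (1 = intrusion).
y = rng.integers(0, 2, 1000)
probs = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (1000, 4)), 0, 1)

def ensemble(weights, probs):
    w = weights / weights.sum()          # normalize combining weights
    return probs @ w                     # weighted average of base probabilities

def error_rate(weights):
    pred = (ensemble(weights, probs) >= 0.5).astype(int)
    return (pred != y).mean()

# --- Step 1: GA search for combining weights minimizing classification error ---
pop = rng.random((30, 4))                # population of candidate weight vectors
for gen in range(50):
    fitness = np.array([error_rate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]   # selection: keep the 10 best
    children = []
    for _ in range(20):
        p1, p2 = parents[rng.integers(0, 10, 2)]
        child = np.where(rng.random(4) < 0.5, p1, p2)             # uniform crossover
        child += rng.normal(0, 0.1, 4) * (rng.random(4) < 0.2)    # mutation
        children.append(np.clip(child, 1e-6, None))
    pop = np.vstack([parents, children])

best_w = pop[np.argmin([error_rate(ind) for ind in pop])]

# --- Step 2: threshold minimizing the asymmetric total misclassification cost ---
C_FP, C_FN = 1.0, 10.0                   # FNE assumed 10x costlier than FPE
scores = ensemble(best_w, probs)
thresholds = np.linspace(0.05, 0.95, 91)
costs = [C_FP * ((scores >= t) & (y == 0)).sum()
         + C_FN * ((scores < t) & (y == 1)).sum() for t in thresholds]
print("best weights:", np.round(best_w / best_w.sum(), 3),
      "best threshold:", thresholds[int(np.argmin(costs))])
```

Because FNE is weighted more heavily, the cost-minimizing threshold typically lands below 0.5, trading some false alarms for fewer missed intrusions.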

Development of Marker-free Transgenic Rice for Increasing Bread-making Quality using Wheat High Molecular Weight Glutenin Subunits (HMW-GS) Gene (밀 고분자 글루테닌 유전자를 이용하여 빵 가공적성 증진을 위한 마커 프리 형질전환 벼의 개발)

  • Park, Soo-Kwon;Shin, DongJin;Hwang, Woon-Ha;Oh, Se-Yun;Cho, Jun-Hyun;Han, Sang-Ik;Nam, Min-Hee;Park, Dong-Soo
    • Journal of Life Science / v.23 no.11 / pp.1317-1324 / 2013
  • High-molecular-weight glutenin subunits (HMW-GS) have been shown to play a crucial role in determining the processing properties of the wheat grain. We produced marker-free transgenic rice plants containing a wheat Glu-1Bx7 gene encoding the HMW-GS from the Korean wheat cultivar 'Jokyeong', using the Agrobacterium-mediated co-transformation method. The Glu-1Bx7 gene's own promoter was inserted into a binary vector for seed-specific expression of the Glu-1Bx7 gene. Two expression cassettes, separate DNA fragments containing only the Glu-1Bx7 gene and the hygromycin phosphotransferase II (HPTII) resistance gene, were introduced separately into the Agrobacterium tumefaciens EHA105 strain for co-infection. Rice calli were then co-infected with the EHA105 strains harboring Glu-1Bx7 and HPTII at a 3:1 ratio of Glu-1Bx7 to HPTII. Among 216 hygromycin-resistant T0 plants, we obtained 24 transgenic lines with both the Glu-1Bx7 and HPTII genes inserted into the rice genome. We reconfirmed the integration of the Glu-1Bx7 gene into the rice genome by Southern blot analysis. Transcripts and proteins of the wheat Glu-1Bx7 were stably expressed in the rice T1 seeds. Finally, marker-free plants harboring only the Glu-1Bx7 gene were successfully screened at the T1 generation.

A Study on the Change of Image Quality According to the Change of Tube Voltage in Computed Tomography Pediatric Chest Examination (전산화단층촬영 소아 흉부검사에서 관전압의 변화에 따른 화질변화에 관한 연구)

  • Kim, Gu;Kim, Gyeong Rip;Sung, Soon Ki;Kwak, Jong Hyeok
    • Journal of the Korean Society of Radiology / v.13 no.4 / pp.503-508 / 2019
  • The purpose of this study was to evaluate chest CT image quality according to changes in tube voltage when using the Volume Axial scanning mode, in order to obtain high-quality images and to suggest an appropriate tube voltage. The CT instrument was a GE Revolution (GE Healthcare, Wisconsin, USA) and the phantom used was the Pediatric Whole Body Phantom PBU-70. Scans were performed in Volume Axial mode using the pediatric protocol in use at Y university hospital. The tube voltage was set to 70 kVp, 80 kVp, and 100 kVp, and the mAs was set by smart mA-ODM. The mean SNR difference of the heart was -4.53 ± 0.26 at 70 kVp, -3.34 ± 0.18 at 80 kVp, and -1.87 ± 0.15 at 100 kVp; the SNR at 70 kVp was about 2.66 lower than at 100 kVp, a statistically significant difference (p<0.05). In the lung SNR mean-difference analysis, the values were -78.20 ± 4.16 at 70 kVp, -79.10 ± 4.39 at 80 kVp, and -77.43 ± 4.72 at 100 kVp; the SNR at 70 kVp, about 0.77 lower than at 100 kVp, was statistically significant (p<0.05). The lung CNR mean difference was 73.67 ± 3.95 at 70 kVp, 75.76 ± 4.25 at 80 kVp, and 75.57 ± 4.62 at 100 kVp; the CNR at 80 kVp was about 2.09 higher than at 70 kVp and statistically significant (p<0.05). At a tube voltage of 100 kVp, the SNR difference was close to 1 while the quality of the heart image was maintained relative to 70 kVp and 80 kVp. However, since there is little difference in SNR between 70 kVp and 80 kVp, 70 kVp can be used to reduce the radiation dose. On the other hand, the CNR showed a difference of about 1 at 70 kVp, with no difference between 80 kVp and 100 kVp; therefore, 80 kVp can reduce the radiation dose in pediatric chest CT. In addition, Volume Axial mode allows a scan with a short scan time of 0.3 seconds, which is useful for pediatric patients who may move during the scan.
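For reference, SNR and CNR comparisons of this kind are typically computed from pixel statistics of regions of interest (ROIs) drawn on the reconstructed images. A minimal sketch using one common pair of definitions (SNR = ROI mean / ROI standard deviation; CNR = difference of ROI means / background standard deviation); the ROI arrays are hypothetical placeholders, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical HU pixel values for ROIs drawn on a chest CT slice.
heart_roi = rng.normal(45.0, 12.0, size=(20, 20))   # heart tissue ROI
lung_roi = rng.normal(-820.0, 30.0, size=(20, 20))  # lung ROI
background = rng.normal(0.0, 10.0, size=(20, 20))   # uniform background ROI

# SNR: mean signal over noise (standard deviation) within the ROI.
snr_heart = heart_roi.mean() / heart_roi.std(ddof=1)

# CNR: contrast between two ROIs over background noise.
cnr_lung = abs(lung_roi.mean() - heart_roi.mean()) / background.std(ddof=1)

print(f"heart SNR = {snr_heart:.2f}, lung-vs-heart CNR = {cnr_lung:.2f}")
```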

An Experimental Study on the Hydration Heat of Concrete Using Phosphate based Inorganic Salt (인산계 무기염을 이용한 콘크리트의 수화 발열 특성에 관한 실험적 연구)

  • Jeong, Seok-Man;Kim, Se-Hwan;Yang, Wan-Hee;Kim, Young-Sun;Ki, Jun-Do;Lee, Gun-Cheol
    • Journal of the Korea Institute of Building Construction / v.20 no.6 / pp.489-495 / 2020
  • While the control of hydration heat in mass concrete has become important as concrete structures grow larger, many conventional strategies show limitations in effectiveness and practicality. In this study, therefore, as a way of controlling the heat of hydration of mass concrete, a method of reducing the hydration heat by controlling the hardening of the cement was examined. The reduction of hydration heat by the developed phosphate-based inorganic salt was first verified in insulated boxes filled with binder paste or concrete mixture. That is, the effects of the phosphate inorganic salt on hydration heat, flow or slump, and compressive strength were analyzed in the binary and ternary blended cements generally used for low heat. As a result, the internal maximum temperature rise induced by the hydration heat decreased by 9.5~10.6% and 10.1~11.7% for binder paste and concrete mixed with the phosphate inorganic salt, respectively. In addition, a clear delay of the time to peak temperature was observed, which is beneficial for the emission of internal hydration heat in real structures. The phosphate inorganic salt developed and verified through this series of experiments showed better performance than existing products in terms of controlling hydration heat and other properties, and it can be used for controlling the hydration heat of mass concrete in the future.

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, and it can cause errors above 30% in judgment. Therefore, an accurate steel plate faults diagnosis system has been continuously required in the industry. To meet this need, this study proposes a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify the various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, the reason being that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' implies comparing the Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after the various reference groups and related variables are defined, data on steel plate faults is collected and used to establish an individual Mahalanobis space per reference group and construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups; the appropriateness of the spaces is then verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed. An experimental test is then implemented to verify the multi-class classification ability, from which the classification accuracy is obtained; if the accuracy is acceptable, the diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based diagnosis system shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in the industry. In addition, the proposed system can reduce the number of measurement sensors installed in the field thanks to the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but can also reduce operation and maintenance costs. For future work, it will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
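The core of the S-MTS idea, one Mahalanobis space per class with classification by the smallest distance, can be sketched in a few lines. This is a minimal illustration with synthetic two-class data, not the paper's full four-stage pipeline (no orthogonal arrays or SN-ratio variable optimization):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reference groups: two fault classes with different feature means.
ref_a = rng.normal([0.0, 0.0, 0.0], 1.0, size=(100, 3))
ref_b = rng.normal([3.0, 1.0, -2.0], 1.0, size=(100, 3))

def mahalanobis_space(ref):
    """Build one Mahalanobis space (mean, std, inverse correlation) per class."""
    mean, std = ref.mean(axis=0), ref.std(axis=0, ddof=1)
    z = (ref - mean) / std                      # standardize the reference data
    corr_inv = np.linalg.inv(np.corrcoef(z, rowvar=False))
    return mean, std, corr_inv

def mahalanobis_distance(x, space):
    mean, std, corr_inv = space
    z = (x - mean) / std
    k = len(mean)
    return float(z @ corr_inv @ z) / k          # squared MD scaled by k (MTS convention)

spaces = {"class_A": mahalanobis_space(ref_a), "class_B": mahalanobis_space(ref_b)}

# Classify a test sample by the smallest Mahalanobis distance across classes.
x_test = np.array([2.8, 1.2, -1.7])
dists = {c: mahalanobis_distance(x_test, s) for c, s in spaces.items()}
print(dists, "->", min(dists, key=dists.get))
```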

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because it calculates similarities based on direct connections and common features among customers. For this reason, hybrid techniques were designed that also use content-based filtering. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks, that is, by calculating similarities indirectly through the similar customers placed between two customers. This means creating a customer network based on purchasing data and calculating the similarity between two customers from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be utilized in calculating it. Different centrality metrics are important in that they may have different effects on recommendation performance; furthermore, the effect of these centrality metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but for all customers and products. By considering a customer's purchase of an item as a link created between the customer and the item on the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them. Since classification models fit this binary problem of whether a link is created or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for the performance evaluation were order data collected from an online shopping mall over four years and two months. Of these, the first three years and eight months of data were used to construct the social network, and the records of the following four months were used to train and evaluate the recommender models. Experiments with the centrality metrics applied to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. This work analyzed the four most commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except SVM. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance across models: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings, with low performance levels, in SVM and KNN. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork that connects two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can achieve higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
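As a rough illustration of the approach described above, the sketch below computes the four centrality metrics with networkx over a toy customer-item purchase network and feeds them to a logistic-regression link classifier. The edges, the pair-feature construction, and the labels are hypothetical placeholders, not the paper's dataset or exact feature design:

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy bipartite purchase network: customers c1..c4 and items i1..i3.
G = nx.Graph([("c1", "i1"), ("c1", "i2"), ("c2", "i1"),
              ("c3", "i2"), ("c3", "i3"), ("c4", "i3")])

# The four centrality metrics analyzed in the study.
cent = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def pair_features(u, v):
    # One simple feature design: sum each centrality over the two endpoints.
    return [cent[m][u] + cent[m][v] for m in cent]

# Hypothetical training pairs: 1 = a purchase link formed later, 0 = it did not.
pairs = [("c1", "i3"), ("c2", "i2"), ("c2", "i3"), ("c4", "i1"), ("c4", "i2")]
labels = [1, 1, 0, 0, 1]

X = np.array([pair_features(u, v) for u, v in pairs])
clf = LogisticRegression().fit(X, labels)

# Predict whether a new customer-item link will be created.
print(clf.predict_proba([pair_features("c3", "i1")])[0, 1])
```

Swapping the classifier (KNN, decision tree, SVM, neural network) while keeping the same centrality features reproduces the kind of model-by-metric comparison the study reports.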