• Title/Summary/Keyword: Lead optimization

Search results: 399

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against networked systems have become frequent, and such intrusions can cause serious damage to government agencies, public offices, and companies that operate various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS), security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal conditions but show poor performance when they encounter new or unknown attack patterns. For this reason, several recent studies have adopted artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their high prediction accuracy, but ANNs have intrinsic limitations such as the risk of overfitting, the need for large sample sizes, and the opacity of their prediction process (the black-box problem). As a result, the most recent studies on IDS have begun to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classifier in order to improve the predictive ability of IDS; in addition, the model considers the asymmetric error cost by optimizing the classification threshold. Two types of error are common in intrusion detection. The first is the false-positive error (FPE), in which normal activity is misjudged as an intrusion, resulting in unnecessary remediation. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. FNE is far more damaging than FPE, so when considering the total cost of misclassification in IDS it is reasonable to assign a heavier weight to FNE. We therefore designed the proposed model to optimize the classification threshold so as to minimize the total misclassification cost. A conventional SVM cannot be applied directly in this setting because it produces only a discrete output (a class label); to resolve this, we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the practical applicability of the model, we applied it to a real-world network intrusion detection dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. From 15,000 log records we selected 1,000 samples by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm its superiority; LOGIT and DT were implemented with PASW Statistics v18.0, ANN with Neuroshell 4.0, and SVM with LIBSVM v2.90, a freeware package for training SVM classifiers. Empirical results showed that the proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared with the ANN-based intrusion detection model. The intrusion detection model proposed in this paper is therefore expected not only to enhance the performance of IDS but also to lead to better management of FNE.
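
The core idea of this model, an SVM that outputs Platt-scaled probabilities with a decision threshold tuned to minimize the asymmetric misclassification cost, can be sketched roughly as follows. This is an illustrative sketch using scikit-learn (whose `probability=True` option performs Platt scaling), not the authors' LIBSVM v2.90 pipeline; the synthetic data and the cost weights are hypothetical assumptions.

```python
# Illustrative sketch: cost-sensitive threshold optimization on top of a
# probabilistic SVM (Platt scaling). Data and cost weights are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical intrusion data: y = 1 means "intrusion", y = 0 means "normal".
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# probability=True enables Platt scaling, so the SVM yields probability
# estimates instead of only discrete class labels (cf. Platt, 2000).
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
p_intrusion = svm.predict_proba(X_te)[:, 1]

# Asymmetric error costs: a false negative (missed intrusion) is assumed to be
# ten times as costly as a false positive (false alarm).
COST_FN, COST_FP = 10.0, 1.0

def total_cost(threshold):
    pred = (p_intrusion >= threshold).astype(int)
    fn = np.sum((pred == 0) & (y_te == 1))
    fp = np.sum((pred == 1) & (y_te == 0))
    return COST_FN * fn + COST_FP * fp

# Search for the classification threshold that minimizes total misclassification cost.
thresholds = np.linspace(0.01, 0.99, 99)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f}, total cost: {total_cost(best):.0f}")
```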

Optimization of Cookie Preparation by Addition of Yam Powder (마분말 첨가 쿠키 제조조건 최적화)

  • Joo, Na-Mi;Lee, Sun-Mee;Jung, Hee-Sun;Park, Sang-Hyun;Song, Yun-Hee;Shin, Ji-Hun;Jung, Hyeon-A
    • Food Science and Preservation / v.15 no.1 / pp.49-57 / 2008
  • This study was conducted to develop an optimal composite recipe for a cookie containing yam powder that would appeal to all age groups, with yam powder partially substituting for wheat flour to reduce the wheat flour content. Cookies were prepared at five levels each of yam powder ($X_1$), sugar ($X_2$), and butter ($X_3$) according to a central composite design (CCD), and the sensory-optimal composite recipe was derived through sensory evaluation and instrumental analysis using response surface methodology (RSM). Among the sensory attributes, color, softness, and overall quality were highly significant (p<0.01) and flavor was significant (p<0.05); among the instrumental measurements, lightness and redness were significant (p<0.05) and spread ratio and hardness were highly significant (p<0.01). The sensory-optimal ratio of the yam cookie was calculated as yam powder 37.35 g, sugar 50.75 g, and butter 78.40 g, and the factors influencing cookie quality were, in order of influence, yam powder, butter, and sugar.
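
The RSM step described above, fitting a second-order response surface to scores measured at central-composite-design points, can be illustrated with a minimal sketch. The coded design points and the response values below are hypothetical placeholders, not the paper's data; the fit is an ordinary least-squares quadratic model in the three coded factors.

```python
# Minimal RSM sketch: fit a second-order (quadratic) response surface to a
# sensory score measured at central composite design points.
# The design points and responses below are hypothetical placeholders.
import numpy as np

def quadratic_terms(X):
    """Expand coded factors [x1, x2, x3] into the full second-order model:
    intercept, linear, two-way interaction, and squared terms."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),            # intercept
        x1, x2, x3,                 # linear terms
        x1 * x2, x1 * x3, x2 * x3,  # interactions
        x1**2, x2**2, x3**2,        # quadratic terms
    ])

# Hypothetical coded CCD points (factorial, axial, and center runs) and
# hypothetical overall-quality scores.
a = 1.682  # axial distance for a rotatable 3-factor CCD
X = np.array(
    [[s1, s2, s3] for s1 in (-1, 1) for s2 in (-1, 1) for s3 in (-1, 1)] +
    [[-a, 0, 0], [a, 0, 0], [0, -a, 0], [0, a, 0], [0, 0, -a], [0, 0, a]] +
    [[0, 0, 0]] * 6
)
y = np.array([5.1, 5.6, 6.0, 6.3, 5.4, 5.8, 6.1, 6.2,
              5.0, 5.9, 5.5, 6.4, 5.7, 6.0,
              6.5, 6.4, 6.6, 6.5, 6.4, 6.6])

# Ordinary least-squares fit of the second-order model; the stationary point of
# the fitted surface gives the candidate optimal factor levels.
beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```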

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment is a complicated process in which various factors describing a company are taken into consideration, and it is very expensive because domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and the multiclass support vector machine (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine), is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset, we use the genetic algorithm (GA), an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, GA gradually improves the search results; in particular, the mutation operator prevents GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely used to search for optimal parameters or feature subsets of AI techniques including MSVM, which is why we adopt it here. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea, specifically bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea and contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, credit rating, was labeled as four classes: 1 (A1), 2 (A2), 3 (A3), and 4 (B and C). Eighty percent of the data for each class was used for training and the remaining 20 percent for validation, and five-fold cross validation was applied to mitigate the small sample size. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial GA package; the other comparative models were built with statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed GAMSVM model outperformed all the competitive models and used fewer independent variables while achieving higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings, and the finally selected kernel parameter values were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test and found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
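
The GAMSVM idea, encoding both a feature-subset mask and the kernel parameters in one chromosome and letting a GA maximize cross-validated accuracy, can be sketched roughly as below. This is a simplified, hand-rolled GA using scikit-learn's one-vs-one multiclass SVM rather than the paper's LIBSVM/Evolver 5.5 setup; the population size, generation count, rates, and synthetic data are hypothetical assumptions.

```python
# Rough sketch of GA-based optimization of an RBF multiclass SVM:
# each chromosome = [binary feature mask | log2(C) | log2(gamma)].
# Simplified stand-in for GAMSVM (which used LIBSVM and Evolver 5.5).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=14, n_informative=8,
                           n_classes=4, random_state=0)
N_FEAT, POP, GEN = X.shape[1], 20, 10

def fitness(chrom):
    """Cross-validated accuracy of an SVM using the encoded features and parameters."""
    mask = chrom[:N_FEAT].astype(bool)
    if not mask.any():
        return 0.0
    C, gamma = 2.0 ** chrom[N_FEAT], 2.0 ** chrom[N_FEAT + 1]
    svm = SVC(kernel="rbf", C=C, gamma=gamma)   # one-vs-one multiclass SVM
    return cross_val_score(svm, X[:, mask], y, cv=5).mean()

def random_chrom():
    return np.concatenate([rng.integers(0, 2, N_FEAT),   # feature mask
                           rng.uniform(-5, 15, 1),        # log2(C)
                           rng.uniform(-15, 3, 1)])       # log2(gamma)

pop = [random_chrom() for _ in range(POP)]
for _ in range(GEN):
    scores = np.array([fitness(c) for c in pop])
    parents = [pop[i] for i in np.argsort(scores)[-POP // 2:]]   # selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, N_FEAT + 1)                        # crossover point
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        flip = rng.integers(0, N_FEAT)                           # mutation: flip one bit
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best[:N_FEAT].astype(bool)))
print("log2(C), log2(gamma):", np.round(best[N_FEAT:], 2))
```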

A Study of the Environmental Consciousness Influences on the Psychological Reaction of Forest Ecotourists (환경의식에 따른 산림생태관광객의 심리적 반응에 관한 연구)

  • Yan, Guang-Hao;Na, Seung-Hwa
    • Journal of Distribution Science / v.10 no.1 / pp.43-52 / 2012
  • With growing concern over environmental issues and the change in environmental consciousness, ecotourism is being discussed in various fields of society. Ecotourism is being popularized for environmental protection and is now developing from one segment of mass tourism into a mainstream product. It emphasizes sustainable development of the destination's society, economy, and environment, and through ecotourism-based study and education it enables people to understand the core value of the ecological environment. The UN designated 2011 as the International Year of Forests, and in recent years forests have become increasingly important for their environmental, economic, social, and cultural values and functions. In particular, the global environmental issues caused by climate change have become an international agenda, and forests are the only effective solution for the carbon dioxide that causes global warming. Forests also constitute a major part of ecotourism and are the settings most used by ecotourists; Korea, where 60% of the land is forest, attracts many of them. With increasing interest in the environment, the number of tourists visiting ecologically valuable forests is rising significantly every year and is receiving considerable attention from the government. However, poor facilities at forest ecotourism sites and inappropriate marketing strategies lead to their poor operation, and tourists' environmental awareness affects both pollution of the ecosystem and the optimization of forest ecotourism. In order to verify the relationships among tourist attractiveness, environmental consciousness, the charm of the attractions, and post-tour attitudes, we established scales based on existing research and used them in a questionnaire survey. Surveys were conducted for 12 weeks, from December 20, 2010 to February 20, 2011, and 582 valid questionnaires out of 700 were obtained for statistical analysis. For the analysis, Cronbach's alpha was first applied to verify reliability and exploratory factor analysis to verify validity. Second, frequency analysis was used for the demographic variables, and AMOS was used to estimate the measurement and structural equation models, with construct, convergent, discriminant, and nomological validity assessed. Third, the effects of ecotourists' environmental consciousness on tourist attractiveness, the charm of the attractions, and post-tour attitudes were analyzed with path analysis and structural equation modeling in AMOS 19. The results show that high awareness of nature protection leads to higher tourist motivation and satisfaction and a more positive attitude after the tour, and they reveal the psychological and behavioral reactions of ecotourists to ecotourism development. However, environmental consciousness was not found to have a significant effect on tourist attractiveness. Accordingly, attention should be paid to changes in nature-protection consciousness and to the psychological reactions of ecotourists when pursuing sustainable ecotourism development and designing ecotourism programs.
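
As a small illustration of the reliability check mentioned above, Cronbach's alpha for a block of scale items can be computed directly from the item variances and the variance of the summed scale; this is a minimal sketch using hypothetical responses rather than the study's survey data.

```python
# Cronbach's alpha for a block of scale items (rows = respondents, cols = items).
# The response matrix below is a hypothetical placeholder, not the survey data.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([[4, 5, 4, 4],
                      [3, 3, 4, 3],
                      [5, 5, 5, 4],
                      [2, 3, 2, 3],
                      [4, 4, 5, 5]])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```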

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases in obligatory case positions is a common phenomenon in Korean and Japanese sentences that is not observed in English. In encyclopedia texts, an argument of a predicate is especially likely to be omitted when it can be filled with a noun phrase co-referential with the title; the omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias such as Wikipedia are a major source for information extraction by intelligent applications such as information retrieval and question answering systems, but the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent, and an antecedent must be co-referential with the zero anaphor. While the candidate antecedents in zero anaphora resolution are only the noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor; the second stage carries out antecedent search over the candidates; and if the search fails, the third stage attempts to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases that appear in the text before the position of the zero anaphor form the search space. The main technique used in previous work is binary classification of all noun phrases in the search space, selecting as the antecedent the noun phrase classified positive with the highest confidence. In contrast, we propose to view antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed for antecedent search, and we are the first to suggest this idea. To perform the sequence labeling we use a structural SVM, which receives a sequence of noun phrases as input and returns a sequence of labels as output; each output label indicates whether the corresponding noun phrase is the antecedent or not. The structural SVM we used is based on a modified Pegasos algorithm, which exploits a subgradient descent methodology for the optimization problem. To train and test the system we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents; training examples were prepared from this corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified, so the performance of our system depends on that of the syntactic analyzer, which is a limitation. When an antecedent is not found in the text, the system tries to use the title to restore the zero anaphor, based on binary classification with a regular SVM. The experiments showed that the system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. Future work that enables the system to utilize semantic information is expected to lead to a significant performance improvement.
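
The optimization core mentioned above is a Pegasos-style subgradient descent. A minimal sketch of the plain (binary, non-structural) Pegasos update is given below to show the idea; the structural SVM used in the paper generalizes the per-example hinge loss to a loss over whole label sequences, and the toy data here are hypothetical.

```python
# Minimal Pegasos sketch (binary linear SVM via stochastic subgradient descent).
# The paper's structural SVM generalizes this update to sequences of labels.
import numpy as np

def pegasos(X, y, lam=0.01, n_iters=2000, seed=0):
    """y must be in {-1, +1}. Returns the learned weight vector."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, n_iters + 1):
        i = rng.integers(len(X))
        eta = 1.0 / (lam * t)                    # step-size schedule
        margin = y[i] * np.dot(w, X[i])
        if margin < 1:                           # hinge loss is active
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                    # only the regularizer acts
            w = (1 - eta * lam) * w
    return w

# Hypothetical, roughly linearly separable toy data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=+1.0, size=(50, 2)),
               rng.normal(loc=-1.0, size=(50, 2))])
y = np.array([1] * 50 + [-1] * 50)

w = pegasos(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy of the toy Pegasos model: {acc:.2f}")
```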

Low temperature plasma deposition of microcrystalline silicon thin films for active matrix displays: opportunities and challenges

  • Cabarrocas, Pere Roca I;Abramov, Alexey;Pham, Nans;Djeridane, Yassine;Moustapha, Oumkelthoum;Bonnassieux, Yvan;Girotra, Kunal;Chen, Hong;Park, Seung-Kyu;Park, Kyong-Tae;Huh, Jong-Moo;Choi, Joon-Hoo;Kim, Chi-Woo;Lee, Jin-Seok;Souk, Jun-H.
    • Proceedings of the Korean Information Display Society Conference (한국정보디스플레이학회 학술대회논문집) / 2008.10a / pp.107-108 / 2008
  • The spectacular development of AMLCDs, made possible by a-Si:H technology, still faces two major drawbacks due to the intrinsic structure of a-Si:H, namely a low mobility and, more importantly, a shift of the transfer characteristics of the TFTs under bias stress. This has led to strong research into the crystallization of a-Si:H films by laser and furnace annealing to produce polycrystalline silicon TFTs. While these devices show improved mobility and stability, they suffer from poor uniformity over large areas and increased cost. In the last decade we have focused on microcrystalline silicon ($\mu c$-Si:H) for bottom-gate TFTs, which can hopefully meet all the requirements for mass production of large-area AMOLED displays [1,2]. In this presentation we will focus on the transfer of a deposition process based on $SiF_4$-Ar-$H_2$ mixtures from a small-area research laboratory reactor to an industrial gen 1 AKT reactor. We will first discuss the optimization of the process conditions leading to fully crystallized films without any amorphous incubation layer, suitable for bottom-gate TFTs, as well as the use of plasma diagnostics to increase the deposition rate up to 0.5 nm/s [3]. The use of silicon nanocrystals appears to be an elegant way to reconcile the opposing requirements of a high deposition rate and a fully crystallized interface [4]. The optimized process conditions were transferred to large-area substrates in an industrial environment, where some process adjustment was required to reproduce the material properties achieved in the laboratory-scale reactor. For the optimized process conditions, the homogeneity of the optical and electronic properties of the $\mu c$-Si:H films deposited on $300 \times 400$ mm substrates was checked by a set of complementary techniques: spectroscopic ellipsometry, Raman spectroscopy, dark conductivity, time-resolved microwave conductivity, and hydrogen evolution measurements demonstrated excellent homogeneity in the structure and transport properties of the films. On the basis of these results, the optimized process conditions were applied to TFTs, for which both bottom-gate and top-gate structures were studied, aiming at characteristics suitable for driving AMOLED displays. Results on the homogeneity of the TFT characteristics over the large-area substrates and their stability will be presented, as well as their application as a backplane for an AMOLED display.

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.53-65 / 2019
  • Object tracking is one of the important steps in building video-based surveillance systems and is considered an essential task alongside object detection and recognition. Various machine learning methods (e.g., least squares, the perceptron, and the support vector machine) can be applied to different designs of tracking systems. Generative methods (e.g., principal component analysis) have generally been utilized because of their simplicity and effectiveness, but they focus only on modeling the target object. Due to this limitation, discriminative methods (e.g., binary classification) have been adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the successful approaches: it can achieve a global minimum because it uses a quadratic approximation to the step function, whereas other methods (e.g., the support vector machine) seek local minima using nonlinear losses such as the hinge loss. Thanks to this quadratic approximation, total error rate minimization has appropriate properties for solving the optimization problems of binary classification. However, it was originally formulated in a batch-mode setting, which limits it to offline learning; with limited computing resources, offline learning cannot handle large-scale data sets. Compared with offline learning, online learning can update its solution without storing all training samples, and with the growth of large-scale data sets it has become an essential property for many applications. Since object tracking must handle data samples in real time, an online learning version of total error rate minimization is necessary to address tracking problems efficiently. Such an online method was developed, but it relied on an approximately reweighted technique. Although the approximation achieved good performance in biometric applications, it assumes that total error rate minimization is reached only asymptotically, as the number of training samples goes to infinity. Because of this approximation, learning errors can accumulate as training samples increase, and the approximated online solution can drift toward a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online manner, so that, unlike the approximately reweighted version, total error rate minimization is achieved exactly. The proposed exact online learning method is then applied to object tracking. Our tracking system adopts particle filtering, in which the observation model combines generative and discriminative methods to leverage the advantages of both. In our experiments, the proposed object tracking system achieves promising performance on eight public video sequences compared with competing object tracking systems, and the paired t-test is reported to evaluate the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that need an exact reweighting process can use the proposed reweighting technique. Beyond object tracking, the method can also be applied to object detection and recognition, so the proposed methods can contribute to the online learning, object tracking, detection, and recognition communities.
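
The particle-filtering backbone of such a tracking system, propagating particles with a motion model, weighting them with an observation model, and resampling, can be sketched in a simplified 1-D form. The plain Gaussian observation likelihood below stands in for the paper's combined generative/discriminative observation model, and all noise parameters and data are hypothetical.

```python
# Simplified 1-D particle filter: predict -> weight -> resample.
# The Gaussian observation likelihood is a stand-in for the paper's combined
# generative/discriminative observation model; parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_STEPS = 500, 50
MOTION_STD, OBS_STD = 0.5, 1.0

# Hypothetical ground-truth object position and noisy observations of it.
truth = np.cumsum(rng.normal(scale=MOTION_STD, size=N_STEPS))
observations = truth + rng.normal(scale=OBS_STD, size=N_STEPS)

particles = rng.normal(scale=1.0, size=N_PARTICLES)  # initial particle cloud
estimates = []
for z in observations:
    # Predict: diffuse particles with the motion model.
    particles += rng.normal(scale=MOTION_STD, size=N_PARTICLES)
    # Weight: Gaussian observation likelihood of each particle.
    weights = np.exp(-0.5 * ((z - particles) / OBS_STD) ** 2)
    weights /= weights.sum()
    # Estimate: weighted mean of the particle cloud.
    estimates.append(np.sum(weights * particles))
    # Resample: draw particles proportionally to their weights.
    particles = rng.choice(particles, size=N_PARTICLES, replace=True, p=weights)

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"tracking RMSE of the toy particle filter: {rmse:.3f}")
```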

Shielding Capability Evaluation of Mobile X-ray Generator through the Production assembled Shield (일체형 방어벽 제작을 통한 이동형 엑스선 발생기의 차폐능 평가)

  • Kim, Seung-Uk;Han, Byeoung-Ju
    • Journal of the Korean Society of Radiology / v.12 no.7 / pp.895-908 / 2018
  • As modern science and medicine advance, the number of examinations using radiation increases daily. General diagnostic X-ray generators are installed in a stationary form, but mobile X-ray generators were developed for patients in the intensive care unit, operating room, and emergency room who cannot be moved to the general X-ray room. Examining such patients with an X-ray generator is certainly necessary, so patient exposure is inevitable, but reducing radiation exposure is a highly important matter for radiologic technologists, guardians, patients in the same room, nurses, and others. For this reason, the revised rule on the safety management of diagnostic X-ray generators, which covers radiation workers, patients, and guardians, requires that a mobile diagnostic X-ray shield be placed whenever examinations are performed in locations other than the operating room, emergency room, and intensive care unit. However, radiologic technologists have great difficulty carrying out examinations with the mobile X-ray generator, the diagnostic X-ray shield partition, the image plate, and the lead apron together. Therefore, we manufactured shielding tools that can be attached to the mobile X-ray generator in place of the X-ray shield partition, analyzed and compared the dose-rate distribution by body part, and sought ways to reduce the radiation exposure of the radiologic technologist and medical team caused by scattered radiation from the patient. The target mobile X-ray generator was the SHIMADZU R-20, and we manufactured equipment for shielding scattered X-rays by installing shielding walls on both sides of the support beam of the mobile X-ray generator; the walls can be folded while moving and expanded during examinations. Dose rates at the eyes, thyroid, breast, abdomen, and gonads were measured five times at each angle under the exposure conditions for the upper and lower extremities, chest, and abdomen, which are frequently examined with the mobile X-ray generator. We used the RSM-100 dosimeter made by IJRAD and measured the horizontal dose rate for each body part. As a result, the dose-reduction rate at the front and rear of the shield was 77-98.7%. Therefore, using the self-produced shielding wall reduces the scattered X-rays and consequently decreases the exposure dose. Through this study, the reduction achieved with the self-produced shielding wall can serve as a step toward shielding optimization and can help realize the reduction of medical exposure recommended in ICRP Publication 103.
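
The dose-reduction (shielding) rate quoted above is simply the percentage decrease of the dose rate measured with the shield relative to the dose rate measured without it; a small sketch of that calculation, with hypothetical dose-rate readings rather than the study's measurements, is shown below.

```python
# Shielding (dose-reduction) rate: percentage decrease of the shielded dose rate
# relative to the unshielded dose rate. The readings below are hypothetical.
def shielding_rate(unshielded, shielded):
    return (1.0 - shielded / unshielded) * 100.0

# Hypothetical dose-rate readings (uSv/h) at one measurement point.
unshielded_dose_rate = 120.0
shielded_dose_rate = 4.2
print(f"shielding rate: {shielding_rate(unshielded_dose_rate, shielded_dose_rate):.1f} %")
```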

Quantitative Differences between X-Ray CT-Based and $^{137}Cs$-Based Attenuation Correction in Philips Gemini PET/CT (GEMINI PET/CT의 X-ray CT, $^{137}Cs$ 기반 511 keV 광자 감쇠계수의 정량적 차이)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Park, Eun-Kyung;Kim, Jong-Hyo;Kim, Jae-Il;Lee, Hong-Jae;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.39 no.3 / pp.182-190 / 2005
  • Purpose: The standardized uptake values (SUVs) of PET images corrected with CT-based attenuation differ from those corrected with $^{137}Cs$-based attenuation, and since various factors can lead to such differences it is important to identify their cause. Because only the X-ray CT and $^{137}Cs$ transmission data are used for attenuation correction in the Philips GEMINI PET/CT scanner, the proper transformation of these data into usable attenuation coefficients for 511 keV photons has to be ascertained. The aim of this study was to evaluate the accuracy of the CT measurement and to compare CT-based and $^{137}Cs$-based attenuation correction in this scanner. Methods: For all experiments, the CT was set to 40 keV (120 kVp) and 50 mAs. To evaluate the accuracy of the CT measurement, a CT performance phantom was scanned and the Hounsfield units (HU) of its regions were compared with the true values. For the comparison of CT-based and $^{137}Cs$-based attenuation correction, transmission scans of an elliptical lung-spine-body phantom and an electron density CT phantom composed of various materials, such as water, bone, brain, and adipose, were performed using CT and $^{137}Cs$. The attenuation coefficients transformed from these data were compared with each other and with the true 511 keV attenuation coefficients acquired using $^{68}Ge$ and an ECAT EXACT 47 scanner. In addition, CT-derived and $^{137}Cs$-derived attenuation coefficients and $^{18}F$-FDG SUVs measured in regions with normal and pathological uptake in patient data were also compared. Results: The HU of all regions in the CT performance phantom measured with the GEMINI PET/CT were equivalent to the known true values. CT-based attenuation coefficients were about 10% lower than those from $^{68}Ge$ in the bony region of the NEMA ECT phantom. Attenuation coefficients derived from the $^{137}Cs$ data were slightly higher than those from the CT data in the electron density CT phantom and in the patient images as well. However, the SUVs in images corrected with $^{137}Cs$ were lower than those in images corrected with CT; the percent difference between the SUVs was about 15%. Conclusion: Although the HU measured with this scanner were accurate, the accuracy of the conversion from CT data into 511 keV attenuation coefficients was limited in the bony region. The discrepancy in the transformed attenuation coefficients and SUVs between the CT-based and $^{137}Cs$-based data shown in this study suggests that further optimization of various parameters in data acquisition and processing is necessary for this scanner.
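
The conversion the study examines, from CT numbers to 511 keV attenuation coefficients, is commonly performed with a bilinear scaling: soft-tissue-like HU values are scaled with the attenuation coefficient of water, while bone-like HU values are given a shallower slope because the photoelectric contribution at CT energies overestimates attenuation at 511 keV. The sketch below shows this generic bilinear form only; the breakpoint and bone slope are illustrative assumptions, not the values used by the GEMINI scanner.

```python
# Generic bilinear conversion of CT numbers (HU) to 511 keV linear attenuation
# coefficients. Breakpoint and bone slope are illustrative assumptions, not the
# parameters of the Philips GEMINI scanner.
import numpy as np

MU_WATER_511 = 0.096   # linear attenuation coefficient of water at 511 keV (1/cm)
BREAK_HU = 50.0        # assumed soft-tissue/bone breakpoint (HU)
BONE_SLOPE = 0.5       # assumed relative slope above the breakpoint

def hu_to_mu_511(hu):
    hu = np.asarray(hu, dtype=float)
    mu = MU_WATER_511 * (1.0 + hu / 1000.0)   # water-equivalent scaling below breakpoint
    bone = hu > BREAK_HU                       # bone-like voxels get a shallower slope
    mu[bone] = MU_WATER_511 * (1.0 + (BREAK_HU + BONE_SLOPE * (hu[bone] - BREAK_HU)) / 1000.0)
    return mu

# Air, water, soft bone, dense bone (HU values are illustrative).
print(np.round(hu_to_mu_511([-1000, 0, 60, 1000]), 4))
```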