• Title/Summary/Keyword: Accuracy Evaluation (정확도평가)

3,585 search results (processing time: 0.05 seconds)

A Proposed Algorithm and Sampling Conditions for Nonlinear Analysis of EEG (뇌파의 비선형 분석을 위한 신호추출조건 및 계산 알고리즘)

  • Shin, Chul-Jin;Lee, Kwang-Ho;Choi, Sung-Ku;Yoon, In-Young
    • Sleep Medicine and Psychophysiology
    • /
    • v.6 no.1
    • /
    • pp.52-60
    • /
    • 1999
  • Objectives: With the aim of finding appropriate conditions and algorithms for dimensional analysis of human EEG, we calculated correlation dimensions under various sampling rates and data acquisition times, and improved the computation algorithm by using bit operations instead of log operations. Methods: EEG signals from 13 scalp leads of one subject were digitized with an A-D converter at 12-bit resolution and a 1,000 Hz sampling rate for 32 seconds. From the original data, we made 15 time series covering sampling rates of 62.5, 125, 250, 500, and 1,000 Hz and data acquisition times of 10, 20, and 30 seconds, respectively. A new algorithm that shortens calculation time using bit operations, together with the Least Trimmed Squares (LTS) estimator for obtaining the optimal slope, was applied to these data. Results: The correlation dimension increased as the data acquisition time became longer. The data with a sampling rate of 62.5 Hz showed the highest correlation dimension regardless of acquisition time, while the other sampling rates yielded similar values. Computing with bit operations instead of log operations shortened the calculation time to a statistically significant degree, and the LTS method estimated the slope of the correlation dimension more stably than the Least Squares estimator. Conclusion: Bit operations and the LTS method enabled time-saving and efficient calculation of the correlation dimension. In addition, a 20-second time series sampled at 125 Hz was adequate to estimate the dimensional complexity of human EEG.
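The correlation dimension discussed above is conventionally estimated from the slope of the log correlation sum versus log radius (the Grassberger-Procaccia approach); the paper's bit-operation speedup and LTS slope fit are not reproduced here. Below is a minimal sketch of the standard correlation sum only, with hypothetical embedding parameters, radii, and random test data:

```python
import numpy as np

def correlation_sum(series, dim=5, delay=1, r=0.5):
    """Correlation sum C(r) of a delay-embedded scalar time series.
    The correlation dimension is the slope of log C(r) vs. log r
    over a suitable scaling range of r."""
    n = len(series) - (dim - 1) * delay
    # Build delay-embedding vectors of dimension `dim`.
    emb = np.stack([series[i * delay : i * delay + n] for i in range(dim)], axis=1)
    # Pairwise Chebyshev distances between all embedded points.
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    pairs = d[np.triu_indices(n, k=1)]
    # Fraction of point pairs closer than r.
    return float(np.mean(pairs < r))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)          # stand-in for a digitized EEG channel
c_small = correlation_sum(x, r=0.5)
c_large = correlation_sum(x, r=2.0)   # C(r) grows monotonically with r
```

In practice C(r) is evaluated at many radii and the dimension is read off a linear fit in log-log coordinates, which is where the paper's LTS estimator replaces ordinary least squares.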


A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has further developed as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, there is a tendency for increasingly fierce competition among online retailers, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they insert a specific keyword on an Internet portal site. The price related to each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behaviors. In other words, only search keywords that direct the search results page to shopping-related pages are extracted from among the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data are used in our study's experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. 
The experimental dataset came from a web site ranking service and Korea's largest portal site. The original sample contains 150 million transaction logs. First, portal sites are selected and the search keywords used on them are extracted; these can be obtained by simple parsing. The extracted keywords are ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the full keyword set, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 as a set of true shopping keywords. We then measured precision, recall, and F-score for both the full keyword set and the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were all higher than those of the full keyword set. This study proposes a scheme for obtaining shopping-related keywords in a relatively simple manner: they can be extracted simply by examining transactions whose next visit is a shopping mall. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to constructing keyword sets for other specialized areas as well as shopping.
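The evaluation above uses precision, recall, and the F-score as the harmonic mean of the two. A minimal sketch of that computation, on hypothetical toy keywords rather than the paper's 344,822-keyword corpus:

```python
def f_score(extracted, true_set):
    """Precision, recall, and F-score (harmonic mean of the two)
    for extracted keywords against a reference set of true shopping keywords."""
    hits = len(set(extracted) & set(true_set))
    precision = hits / len(extracted)
    recall = hits / len(true_set)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Hypothetical toy data: two of four extracted keywords are true shopping keywords.
true_kw = {"sneakers", "laptop bag", "wireless earbuds", "winter coat"}
extracted = ["sneakers", "laptop bag", "news", "weather"]
p, r, f = f_score(extracted, true_kw)  # 0.5, 0.5, 0.5
```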

Extraction of Landmarks Using Building Attribute Data for Pedestrian Navigation Service (보행자 내비게이션 서비스를 위한 건물 속성정보를 이용한 랜드마크 추출)

  • Kim, Jinhyeong;Kim, Jiyoung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.37 no.1
    • /
    • pp.203-215
    • /
    • 2017
  • Recently, interest in Pedestrian Navigation Service (PNS) has increased with the spread of smartphones and improvements in positioning technology, and landmarks are effective in route guidance for pedestrians because of the characteristics of pedestrian movement and the success rate of wayfinding. Accordingly, research on extracting landmarks has progressed. However, preceding studies have a limitation: they considered only the differences between buildings and did not consider the visual attention drawn by the map displayed in a PNS. This study addresses this problem by defining building attributes as local and global variables. Local variables reflect the saliency of buildings by representing differences between buildings, and global variables reflect visual attention by representing the inherent characteristics of buildings. This study also considers network connectivity and resolves the overlap of landmark candidate groups with a network Voronoi diagram. To extract landmarks, we defined building attribute data based on preceding research. Next, we selected choice points for pedestrians in the pedestrian network data and determined landmark candidate groups at each choice point. Building attribute data were calculated for the extracted candidate groups, and landmarks were finally extracted by principal component analysis. We applied the proposed method to part of Gwanak-gu, Seoul, and evaluated the extracted landmarks by comparison with the labels and landmarks used by portal sites such as NAVER and DAUM. In conclusion, 132 (60.3%) of the 219 NAVER and DAUM landmarks were also extracted by the proposed method, and we confirmed that 228 landmarks not labeled in NAVER or DAUM were helpful for recognizing changes of direction in local-level wayfinding.
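The final extraction step above uses principal component analysis over building attributes. A minimal sketch of scoring landmark candidates by their projection onto the first principal component; the attribute columns and values are hypothetical, not the paper's attribute scheme:

```python
import numpy as np

def pca_scores(X):
    """Project standardized building attributes onto the first principal
    component; candidates with extreme scores stand out as landmarks."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each attribute
    cov = np.cov(Xs, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]               # direction of largest variance
    return Xs @ pc1

# Hypothetical attributes per candidate: [height (m), floor area (m^2), color contrast]
X = np.array([[12., 300., 0.2],
              [45., 900., 0.8],   # tallest, largest, most contrasting candidate
              [10., 250., 0.1],
              [20., 400., 0.4]])
scores = pca_scores(X)
best = int(np.argmax(np.abs(scores)))  # most salient candidate at this choice point
```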

$CO_2$ Transport for CCS Application in Republic of Korea (이산화탄소 포집 및 저장 실용화를 위한 대한민국에서의 이산화탄소 수송)

  • Huh, Cheol;Kang, Seong-Gil;Cho, Mang-Ik
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.13 no.1
    • /
    • pp.18-29
    • /
    • 2010
  • Offshore subsurface storage of $CO_2$ is regarded as one of the most promising options for responding to severe climate change. Marine geological storage of $CO_2$ consists of capturing $CO_2$ from major point sources, transporting it to storage sites, and storing it in offshore subsurface geological structures such as depleted gas reservoirs and deep-sea saline aquifers. Since 2005, we have developed the relevant technologies for marine geological storage of $CO_2$, including surveys of possible storage sites and basic designs of the $CO_2$ transport and storage processes. To design a reliable $CO_2$ marine geological storage system, we devised a hypothetical scenario and used a numerical simulation tool to study its detailed processes. The process of transporting $CO_2$ from onshore capture sites to offshore storage sites can be simulated with a thermodynamic equation of state. Before the main process-design calculations, we compared and analyzed the relevant equations of state; to evaluate their predictive accuracy, we compared the results of numerical calculations with experimental reference data. Up to now, process design for $CO_2$ marine geological storage has been carried out mainly for pure $CO_2$. Unfortunately, captured $CO_2$ mixtures contain impurities such as $N_2$, $O_2$, Ar, $H_{2}O$, $SO_x$, and $H_{2}S$, and even a small amount of impurities can change the thermodynamic properties and significantly affect the compression, purification, and transport processes. This paper analyzes the major design parameters useful for constructing onshore and offshore $CO_2$ transport systems. On the basis of a parametric study of the hypothetical scenario, we suggest relevant variation ranges for the design parameters, particularly the flow rate, diameter, temperature, and pressure.
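The abstract compares equations of state for transport design without naming which ones. As an illustration only, here is a minimal sketch of one standard cubic equation of state, Peng-Robinson, solved for the compressibility factor of pure $CO_2$; the choice of EOS, the operating conditions, and the critical constants (taken from standard tables) are assumptions, not the paper's method:

```python
import numpy as np

R = 8.314                                  # J/(mol K)
TC, PC, OMEGA = 304.13, 7.3773e6, 0.2239   # CO2 critical constants (standard tables)

def pr_compressibility(T, P):
    """Compressibility factor Z of pure CO2 from the Peng-Robinson EOS.
    Returns (largest real root, smallest real root) of the cubic in Z,
    i.e. the vapor-like and liquid-like solutions."""
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2
    alpha = (1 + kappa * (1 - np.sqrt(T / TC)))**2
    a = 0.45724 * R**2 * TC**2 / PC * alpha
    b = 0.07780 * R * TC / PC
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Cubic: Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = sorted(z.real for z in roots if abs(z.imag) < 1e-9)
    return real[-1], real[0]

# Hypothetical pipeline condition: subcritical CO2 vapor at 280 K and 3 MPa.
z_vap, z_liq = pr_compressibility(T=280.0, P=3.0e6)
```

Density then follows from Z via rho = P * M / (Z * R * T); impurities like N2 or H2S would be handled with mixing rules, which this single-component sketch omits.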

Three-Dimensional Dosimetry Using Magnetic Resonance Imaging of Polymer Gel (중합체 겔과 자기공명영상을 이용한 3차원 선량분포 측정)

  • Oh Young-Taek;Kang Haejin;Kim Miwha;Chun Mison;Kang Seung-Hee;Suh Chang Ok;Chu Seong Sil;Seong Jinsil;Kim Gwi Eon
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.264-273
    • /
    • 2002
  • Purpose: Three-dimensional radiation dosimetry using magnetic resonance imaging of a polymer gel was recently introduced. This dosimetry system is based on radiation-induced chain polymerization of acrylic monomers in a muscle-equivalent gel and provides accurate three-dimensional dose distributions. We planned this study to evaluate the clinical value of this three-dimensional dosimetry. Materials and Methods: The polymer gel was poured into cylindrical glass flasks and spherical glass flasks. The cylindrical test tubes were used for dose-response evaluation, and the spherical flasks, which are comparable in size to the human head, were used for isodose curves. T2 maps were calculated from the MR images using IDL software, and dose distributions were displayed for dosimetry. The same spherical gel flask and the same irradiation technique were used for film and TLD dosimetry, and the results were compared with each other. Results: The R2 of the gel responded linearly to radiation dose in the range of 2 to 15 Gy. Repeated dosimetry of the spherical gel showed the same isodose curves, and these were identical to the dose distributions from the treatment planning system, especially in the high-dose range. In addition, the gel dosimetry system showed results comparable or superior to film and TLD dosimetry. Conclusion: Three-dimensional dosimetry for conformal radiation therapy using MRI of a polymer gel gave stable and accurate results. Although more studies are needed for convenient clinical application, it appears to be a useful tool for conformal radiation therapy.
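The linear R2-versus-dose response reported above is what makes the gel usable as a dosimeter: a linear calibration fitted on the test-tube readings can be inverted to map measured R2 back to dose. A minimal sketch with made-up calibration values (the numbers below are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical calibration readings: R2 relaxation rate (1/s) vs. dose (Gy),
# illustrating the linear response reported over 2-15 Gy.
dose = np.array([2.0, 5.0, 8.0, 11.0, 15.0])
r2 = np.array([1.1, 1.9, 2.8, 3.6, 4.7])   # made-up example values

slope, intercept = np.polyfit(dose, r2, 1)  # linear fit R2 = slope*dose + intercept

def dose_from_r2(r2_measured):
    """Invert the linear calibration: map a measured R2 back to dose."""
    return (r2_measured - intercept) / slope

recovered = dose_from_r2(slope * 10.0 + intercept)  # round-trips to 10 Gy
```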

$^{99m}Tc$-HMPAO-labelled Leucocyte Scintigraphy in the Diagnosis of Infection after Total Knee Replacement Arthroplasty (인공슬관절 전치환술 환자에서 $^{99m}Tc$-HMPAO-백혈구 스캔을 이용한 인공관절 감염의 진단)

  • Park, Dong-Rib;Kim, Jae-Seung;Ryu, Jin-Sook;Moon, Dae-Hyuk;Bin, Seong-Il;Cho, Woo-Shin;Lee, Hee-Kyung
    • The Korean Journal of Nuclear Medicine
    • /
    • v.33 no.4
    • /
    • pp.413-421
    • /
    • 1999
  • Purpose: This study was performed to evaluate the usefulness of $^{99m}Tc$-HMPAO-labelled leucocyte scintigraphy for diagnosing prosthetic infection after total knee replacement arthroplasty without the aid of subsequent bone marrow scintigraphy. Materials and Methods: The study subjects were 25 prostheses of 17 patients (one man and 16 women; mean age, 65 years) who had undergone total knee replacement arthroplasty. After injection of $^{99m}Tc$-HMPAO-labelled leucocytes, whole-body planar and knee SPECT images were obtained in all patients. The subjects were classified into three groups according to clinical suspicion of prosthetic infection: Group A (n=11) with high suspicion of infection, Group B (n=6) with equivocal suspicion of infection, and Group C (n=8) with asymptomatic contralateral prostheses. The final diagnosis of infection was based on surgical, histological and bacteriological data, and on clinical follow-up. Results: Infection was confirmed in 13 prostheses (11 in Group A and 2 in Group B). All prostheses in Group A were true positive. There were two true positives, one false positive, and three true negatives in Group B, and six true negatives and two false positives in Group C. Overall sensitivity, specificity, and accuracy for diagnosing an infected knee prosthesis were 100%, 75%, and 88%, respectively. Conclusion: $^{99m}Tc$-HMPAO-labelled leucocyte scintigraphy is a sensitive method for the diagnosis of an infected knee prosthesis. However, false positive uptake even in asymptomatic prostheses suggests that bone marrow scintigraphy may be needed to achieve improved specificity.
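The reported 100%/75%/88% figures follow directly from the per-group counts in the abstract (13 true positives, 3 false positives, 9 true negatives, 0 false negatives). A minimal sketch of that computation:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Counts from the abstract: 11 TP (Group A); 2 TP, 1 FP, 3 TN (Group B);
# 2 FP, 6 TN (Group C); no false negatives.
sens, spec, acc = diagnostic_metrics(tp=13, fp=3, tn=9, fn=0)  # 1.0, 0.75, 0.88
```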


Characteristics of the Differences between Significant Wave Height at Ieodo Ocean Research Station and Satellite Altimeter-measured Data over a Decade (2004~2016) (이어도 해양과학기지 관측 파고와 인공위성 관측 유의파고 차이의 특성 연구 (2004~2016))

  • WOO, HYE-JIN;PARK, KYUNG-AE;BYUN, DO-SEONG;LEE, JOOYOUNG;LEE, EUNIL
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.23 no.1
    • /
    • pp.1-19
    • /
    • 2018
  • In order to compare significant wave height (SWH) data from multiple satellites (GFO, Jason-1, Envisat, Jason-2, Cryosat-2, SARAL) with SWH measurements from the Ieodo Ocean Research Station (IORS), we constructed a 12-year matchup database between satellite and IORS measurements from December 2004 to May 2016. The satellite SWH showed a root mean square error (RMSE) of about 0.34 m and a positive bias of 0.17 m with respect to the IORS wave height. The differences between the satellite and IORS wave heights showed no specific seasonal or interannual variability, which confirmed the consistency of the satellite data. We investigated the effect of the wind field on the SWH differences between the satellites and IORS; a similar positive bias of about 0.17 m was observed for all satellites. To understand the effects of topography and of the IORS structure itself on the SWH differences, we investigated the directional dependence of the wave height differences; however, no statistically significant characteristics were revealed. Analyzing the errors as a function of the distance between the satellite track and IORS, the bias remained almost constant at about 0.14 m regardless of distance. By contrast, the amplitude of the SWH differences, defined as the maximum minus the minimum value in a given distance range, increased linearly with distance. On the other hand, an accuracy evaluation of the satellite SWH against the Donghae marine meteorological buoy of the Korea Meteorological Administration gave a relatively small RMSE of about 0.27 m and none of the bias characteristics seen in the validation at IORS. In this paper, we propose a conversion formula to correct the significant wave data of IORS with the satellite SWH data. In addition, this study emphasizes that the reliability of the data should be secured before they are extensively utilized, and presents specific methods and strategies for upgrading the IORS into an international marine observation site.
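The RMSE and bias statistics reported above are straightforward to compute from matchup pairs. A minimal sketch with hypothetical matchup values, not the actual IORS database:

```python
import numpy as np

def rmse_and_bias(satellite, in_situ):
    """RMSE and mean bias of satellite SWH against in-situ measurements.
    A positive bias means the satellite overestimates the wave height."""
    diff = np.asarray(satellite) - np.asarray(in_situ)
    return float(np.sqrt(np.mean(diff**2))), float(np.mean(diff))

# Hypothetical matchup pairs in metres (satellite vs. station).
sat  = [1.20, 0.95, 2.10, 1.55]
iors = [1.00, 0.80, 1.90, 1.40]
rmse, bias = rmse_and_bias(sat, iors)   # positive bias: satellite reads high
```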

Classification of Urban Green Space Using Airborne LiDAR and RGB Ortho Imagery Based on Deep Learning (항공 LiDAR 및 RGB 정사 영상을 이용한 딥러닝 기반의 도시녹지 분류)

  • SON, Bokyung;LEE, Yeonsu;IM, Jungho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.24 no.3
    • /
    • pp.83-98
    • /
    • 2021
  • Urban green space is an important component for enhancing urban ecosystem health; thus, identifying its spatial structure is required to manage a healthy urban ecosystem. Since 2010, the Ministry of Environment has provided the level 3 land cover map, the map with the highest (1 m) spatial resolution, with a total of 41 classes. However, specific urban green information such as street trees is identified merely as grassland in this map, or not classified as vegetated area at all. Therefore, this study classified detailed urban green information (i.e., tree, shrub, and grass) not included in the existing level 3 land cover map, using two types of high-resolution (<1 m) remote sensing data (i.e., airborne LiDAR and RGB ortho imagery) in Suwon, South Korea. U-Net, an image segmentation deep learning approach, was adopted to classify detailed urban green space. A total of three classification models (i.e., LRGB10, LRGB5, and RGB5) were proposed depending on the target number of classes and the types of input data. The average overall accuracies for the test sites were 83.40% (LRGB10), 89.44% (LRGB5), and 74.76% (RGB5). Among the three models, LRGB5, which uses both airborne LiDAR and RGB ortho imagery with 5 target classes (i.e., tree, shrub, grass, building, and others), performed best. The area ratio of total urban green space (based on tree, shrub, and grass information) for the entire city of Suwon was 45.61% (LRGB10), 43.47% (LRGB5), and 44.22% (RGB5). All models provided an additional 13.40% of urban tree information on average compared to the existing level 3 land cover map. These urban green classification results are expected to be utilized in various urban green studies and decision-making processes, as they provide detailed information on urban green space.

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose a deep learning structure for improving the quality of polygonal containers. The structure consists of convolution layers, bottleneck layers, fully connected layers, and a softmax layer. A convolution layer obtains a feature image by applying several 3×3 convolution filters to the input image or to the feature image of the previous layer. A bottleneck layer selects only the optimal features from the feature image extracted by the convolution layers: it reduces the number of channels with a 1×1 convolution followed by ReLU, then applies a 3×3 convolution followed by ReLU. A global average pooling operation performed after the bottleneck layers reduces the size of the feature image by collapsing each feature map to a single value. The fully connected stage then produces the output through six fully connected layers, and the softmax layer converts the resulting class scores into values between 0 and 1 through the softmax activation function. After training is completed, the recognition process acquires an image with a camera, detects the container position, and classifies non-circular glass bottles with the trained network, as in the training process. To evaluate the performance of the proposed structure, an experiment at an accredited testing institute showed a good/defective discrimination accuracy of 99%, on par with the highest reported level, and an average inspection time of 1.7 seconds, within the operating time standards of production processes that use non-circular machine vision systems. These results demonstrate the effectiveness of the proposed deep learning structure for improving the quality of polygonal containers.
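Two of the building blocks named above, global average pooling and softmax, are simple enough to sketch directly; the shapes and logits below are illustrative, not the paper's network:

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Collapse each HxW feature map to a single value by averaging,
    as done after the bottleneck layers: shape (C, H, W) -> (C,)."""
    return feature_maps.mean(axis=(1, 2))

def softmax(logits):
    """Convert raw class scores into probabilities in (0, 1) summing to 1."""
    z = np.exp(logits - np.max(logits))   # subtract max for numerical stability
    return z / z.sum()

feats = np.arange(24, dtype=float).reshape(2, 3, 4)  # 2 channels of 3x4 features
pooled = global_average_pooling(feats)                # one value per channel
probs = softmax(np.array([2.0, 1.0, 0.1]))            # largest logit -> largest prob
```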

Quality Evaluation through Inter-Comparison of Satellite Cloud Detection Products in East Asia (동아시아 지역의 위성 구름탐지 산출물 상호 비교를 통한 품질 평가)

  • Byeon, Yugyeong;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Woo, Jongho;Jeon, Uujin;Han, Kyung-soo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_2
    • /
    • pp.1829-1836
    • /
    • 2021
  • Cloud detection determines the presence or absence of clouds in each pixel of a satellite image and is an important factor affecting the utility and accuracy of the image. In this study, among the satellites of various agencies that provide cloud detection data, we perform quantitative and qualitative comparisons of the cloud detection products of GK-2A/AMI, Terra/MODIS, and Suomi-NPP/VIIRS. In the quantitative comparison, the Proportion Correct (PC) index was 74.16% for GK-2A & MODIS and 75.39% for GK-2A & VIIRS in January, and 87.35% for GK-2A & MODIS and 87.71% for GK-2A & VIIRS in April, with little difference between satellites. In the qualitative comparison against RGB imagery, the April results detected clouds better than the January results, consistent with the quantitative comparison. However, where thin clouds or snow cover were present, there were some differences between the satellites' cloud detection results.
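The Proportion Correct index used above is the fraction of pixels on which two cloud masks agree. A minimal sketch on toy 4×4 masks standing in for collocated GK-2A and MODIS pixels:

```python
import numpy as np

def proportion_correct(mask_a, mask_b):
    """Proportion Correct (PC): fraction of pixels where two cloud masks
    agree (both cloudy or both clear)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return float(np.mean(a == b))

# Toy cloud masks (1 = cloudy); the two masks disagree on 2 of 16 pixels.
gk2a  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])
modis = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1]])
pc = proportion_correct(gk2a, modis)   # 14/16 pixels agree
```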