• Title/Summary/Keyword: Preprocessing method

Search Results: 1,081

Reading Deviations of Glass Rod Dosimeters Using Different Pre-processing Methods for Radiotherapeutic in-vivo Dosimetry (유리선량계의 전처리 방법이 방사선 치료 선량 측정에 미치는 영향)

  • Jeon, Hosang; Nam, Jiho; Park, Dahl; Kim, Yong Ho; Kim, Wontaek; Kim, Dongwon; Ki, Yongkan; Kim, Donghyun; Lee, Ju Hye
    • Progress in Medical Physics / v.24 no.2 / pp.92-98 / 2013
  • Experimental verification of the treatment plan at the treatment site is the ultimate method of assuring the quality of radiotherapy, so in-vivo skin dose measurement is an essential procedure for confirming the delivered dose. In this study, the glass rod dosimeter (GRD), a photoluminescence-based dosimeter, was studied to produce a guideline for using GRDs in in-vivo dosimetry for radiotherapy quality assurance. A pre-processing procedure, a heating operation that stabilizes the dosimeter, is essential before GRDs can be read, and the manufacturer recommends two methods: a heating method (70°C for 30 minutes) and a waiting method (room temperature for 24 hours). We irradiated 20 GRD elements equally with 1.0 Gy and then applied each pre-processing method to 10 of them. With the heating method, the reading deviation among GRDs read at the same time was relatively high, but the deviation over time was very low. With the waiting method, the deviation among GRDs was low, but the deviation over time was relatively high. A meaningful difference was found between the mean readings of the two pre-processing methods. Both methods kept the mean dose deviation under 5%, but the waiting method showed a relatively strong dependence on reading time. In terms of accuracy and efficiency, the GRD is well suited to in-vivo dosimetry, and understanding how pre-processing affects accuracy is required for the most accurate measurements. Further study is needed to achieve stable accuracy under different irradiation conditions.
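
As a rough illustration of the comparison described in the abstract, the sketch below contrasts the two kinds of deviation it reports: spread among GRD elements read at the same time versus drift of each element across reading times. All readings, group sizes, and noise levels are synthetic placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical GRD readings (Gy): 10 elements per pre-processing group,
# each read at 5 successive times. Values are illustrative only.
rng = np.random.default_rng(0)
heated = 1.0 + rng.normal(0.0, 0.02, size=(10, 5))  # heating: 70 C, 30 min
waited = 1.0 + rng.normal(0.0, 0.01, size=(10, 5))  # waiting: room temp, 24 h

def deviation_report(readings, label):
    """Coefficient of variation among elements (per session) and over time."""
    cv_same_time = readings.std(axis=0) / readings.mean(axis=0)
    cv_over_time = readings.std(axis=1) / readings.mean(axis=1)
    print(f"{label}: same-time CV {cv_same_time.mean():.3%}, "
          f"over-time CV {cv_over_time.mean():.3%}")

deviation_report(heated, "heating method")
deviation_report(waited, "waiting method")
```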

Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han; Yong Ki Kim; Mi Hye Kim
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.189-198 / 2024
  • Lipreading is an important part of speech recognition, and several studies have been conducted to improve the performance of lipreading systems for speech recognition. Recent studies improved recognition performance by modifying the model architecture of the lipreading system. Unlike that previous research, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues used in human lipreading and set additional regions such as the chin and cheeks as regions of interest, alongside the lip region that existing lipreading systems use, and compare the recognition rate of each region of interest to identify the best-performing one. In addition, assuming that differences in normalization results caused by the interpolation method used when normalizing the size of the region of interest affect recognition performance, we interpolate the same region of interest using nearest-neighbor, bilinear, and bicubic interpolation and compare the recognition rates to identify the best-performing method. Each region of interest was detected by training an object detection neural network, and dynamic time warping (DTW) templates were generated by normalizing each region of interest, extracting and combining features, and mapping a dimensionality reduction of the combined features into a low-dimensional space. The recognition rate was evaluated by comparing the distances between the generated DTW templates and the data mapped to the low-dimensional space. In the comparison of regions of interest, the region containing only the lips showed an average recognition rate of 97.36%, 3.44 percentage points higher than the 93.92% average of the previous study; in the comparison of interpolation methods, bilinear interpolation achieved 97.36%, 14.65 points higher than nearest-neighbor interpolation and 5.55 points higher than bicubic interpolation. The code used in this study is available at https://github.com/haraisi2/Lipreading-Systems.
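
The preprocessing step compared in this paper, cropping a region of interest and normalizing its size under different interpolation methods, can be sketched with OpenCV as below. The frame path, ROI box, and target size are placeholder assumptions, not values from the paper.

```python
import cv2

# Crop a detected region of interest and normalize its size with the three
# interpolation methods compared in the paper. Concrete values are toy
# assumptions.
frame = cv2.imread("frame.png")        # one video frame (hypothetical path)
x, y, w, h = 120, 200, 96, 64          # ROI box from an object detector (toy)
roi = frame[y:y + h, x:x + w]

target_size = (112, 112)               # normalized ROI size (assumed)
interpolations = {
    "nearest":  cv2.INTER_NEAREST,
    "bilinear": cv2.INTER_LINEAR,      # best performer in the paper
    "bicubic":  cv2.INTER_CUBIC,
}
normalized = {name: cv2.resize(roi, target_size, interpolation=flag)
              for name, flag in interpolations.items()}
```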

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon; Jung, Sang Hyung; Kim, Jun Ho; Min, Eun Joo; Yeo, Un Yeong; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.97-117 / 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, enabling users and manufacturers to measure and understand them. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect how consumer opinion differs from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, these approaches still produce criteria irrelevant to the products because extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model, which extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest neighbor classifier. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, per-topic words are obtained from the reviews with LDA, and only the nouns among the topic words are kept as candidate criteria words. These words are then tagged by whether they can serve as criteria for each middle-level category, and every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest neighbor, case-based approach is used to classify each word against the tags. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category, and the same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. Review data were taken from the 'Electronics/Digital' section, one of the high-level categories on 11st. For comparison, three other models were used: the actual criteria of 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the three models above. The criteria words extracted by each model were combined into a single word set used for survey questionnaires, in which respondents chose every item they considered an appropriate criterion for each category; each model scored when a chosen word had been extracted by that model. The suggested model had higher scores than the other models in 8 of the 10 low-level categories, and paired t-tests on the scores confirmed that it performs better in 26 of 30 tests. In addition, the suggested model was the best in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by the k-nearest neighbor approach, overcoming the limits of previous dictionary-based and frequency-based refinement models. This study can contribute to improved review analysis for deriving business insights in the e-commerce market.
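
A minimal sketch of the two-stage pipeline described above, assuming gensim for LDA and scikit-learn for the k-NN refinement; the corpus, tags, and random vectors are toy stand-ins for real reviews and a pre-trained word embedding model.

```python
from gensim import corpora, models
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

reviews = [["battery", "lasts", "long"], ["screen", "bright", "battery"],
           ["fast", "delivery", "screen"]]        # tokenized reviews (toy data)
dictionary = corpora.Dictionary(reviews)
bow = [dictionary.doc2bow(doc) for doc in reviews]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)

# Candidate criteria words: top words per topic (here, all tokens act as nouns).
candidates = {dictionary[wid] for t in range(2)
              for wid, _ in lda.get_topic_terms(t, topn=3)}

# Refinement: classify candidates with k-NN over word vectors tagged during
# the preparation phase as criteria (1) or not (0). Random vectors stand in
# for a pre-trained word embedding model.
rng = np.random.default_rng(0)
tagged_words = ["battery", "screen", "delivery", "fast", "bright", "long"]
vectors = {w: rng.normal(size=16) for w in tagged_words}
tags = [1, 1, 1, 0, 0, 0]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit([vectors[w] for w in tagged_words], tags)
criteria = [w for w in candidates
            if w in vectors and knn.predict([vectors[w]])[0] == 1]
print(criteria)
```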

R-lambda Model Based Rate Control for GOP Parallel Coding in a Real-Time HEVC Software Encoder (HEVC 실시간 소프트웨어 인코더에서 GOP 병렬 부호화를 지원하는 R-lambda 모델 기반의 율 제어 방법)

  • Kim, Dae-Eun; Chang, Yongjun; Kim, Munchurl; Lim, Woong; Kim, Hui Yong; Seok, Jin Wook
    • Journal of Broadcast Engineering / v.22 no.2 / pp.193-206 / 2017
  • In this paper, we propose a rate control method based on the R-λ model that supports a parallel encoding structure at the GOP level or IDR-period level for 4K UHD input video in real time. To this end, a slice-level bit allocation method is proposed for parallel rather than sequential encoding. When a rate control algorithm is applied with GOP-level or IDR-period-level parallelism, the number of bits already consumed cannot be shared among frames belonging to the same level of the hierarchical-B structure, except at the lowest level, so the bit budget cannot be managed with existing bit allocation methods. To solve this problem, we improve the conventional procedure, which allocates target bits sequentially in encoding order: the proposed strategy first assigns target bits to GOPs and then distributes each GOP's budget from the lowest depth level to the highest depth level of the HEVC hierarchical-B structure. In addition, we propose a preprocessing method that improves subjective image quality by allocating bits according to the coding complexity of the frames. Experimental results show that the proposed bit allocation method works well for frame-level parallel HEVC software encoders, and they confirm that the rate controller's performance can be further improved by a more elaborate bit allocation strategy that uses the preprocessing results.
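
The GOP-level allocation the abstract describes, assigning target bits to a GOP first and then distributing them across the depth levels of the hierarchical-B structure, can be sketched as below. The per-level weights are invented for illustration; an actual R-lambda rate controller derives the split from its rate model.

```python
def allocate_gop_bits(gop_budget_bits, frames_per_level, level_weights):
    """frames_per_level[d]: number of frames at depth d (d = 0 is the lowest
    level of the hierarchical-B structure); level_weights[d]: relative bits
    per frame at depth d (illustrative values, normally from the rate model)."""
    total = sum(n * w for n, w in zip(frames_per_level, level_weights))
    return [[gop_budget_bits * w / total] * n
            for n, w in zip(frames_per_level, level_weights)]

# Example: an 8-frame hierarchical-B GOP with 1, 1, 2, and 4 frames at
# depths 0..3; lower depths get more bits per frame.
print(allocate_gop_bits(800_000, [1, 1, 2, 4], [8.0, 4.0, 2.0, 1.0]))
```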

Comparative study of flood detection methodologies using Sentinel-1 satellite imagery (Sentinel-1 위성 영상을 활용한 침수 탐지 기법 방법론 비교 연구)

  • Lee, Sungwoo; Kim, Wanyub; Lee, Seulchan; Jeong, Hagyu; Park, Jongsoo; Choi, Minha
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.181-193 / 2024
  • The increasing atmospheric imbalance caused by climate change raises precipitation and thus the frequency of flooding, so there is a growing need for technology to detect and monitor flood events. To minimize flood damage, continuous monitoring is essential, and flood areas can be detected in Synthetic Aperture Radar (SAR) imagery, which is not affected by weather conditions. The observed data undergo a preprocessing step that uses a median filter to reduce noise. Classification techniques were then employed to separate water from non-water areas, with the aim of evaluating the effectiveness of each method for flood detection. In this study, the Otsu method and a Support Vector Machine (SVM) were used to classify water and non-water bodies, and the overall performance of the models was assessed with a confusion matrix. The suitability for flood detection was evaluated by comparing the Otsu method, an optimal-threshold-based classifier, with the SVM, a machine learning technique that minimizes misclassification through training. The Otsu method delineated boundaries between water and non-water well but exhibited a higher misclassification rate due to the influence of mixed substances. Conversely, the SVM produced a lower false positive rate and was less sensitive to mixed substances, so it was more accurate under non-flood conditions. While the Otsu method showed slightly higher accuracy under flood conditions than the SVM, the difference was less than 5% (Otsu: 0.93, SVM: 0.90); in pre-flood and post-flood conditions, however, the accuracy difference exceeded 15% (Otsu: 0.77, SVM: 0.92), indicating that the SVM is more suitable for water body and flood detection. Based on these findings, more accurate detection of water bodies and floods is expected to help minimize flood-related damage and losses.
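
A compact sketch of the two classifiers being compared, applied to a synthetic SAR-like backscatter image: median filtering for speckle, Otsu thresholding, and an SVM trained on labeled pixels, scored with a confusion matrix. All data and parameters here are synthetic assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
water = rng.normal(-18, 2, (64, 64))   # low backscatter (dB): water
land = rng.normal(-8, 3, (64, 64))     # higher backscatter: non-water
image = np.hstack([water, land])
truth = np.hstack([np.ones((64, 64), int), np.zeros((64, 64), int)])  # 1 = water

smoothed = median_filter(image, size=3)   # speckle/noise reduction

# Otsu: one optimal threshold separates the two classes; water is darker.
otsu_pred = (smoothed < threshold_otsu(smoothed)).astype(int)

# SVM: train on a labeled sample of pixels, then predict every pixel.
X = smoothed.reshape(-1, 1)
y = truth.ravel()
idx = rng.choice(X.shape[0], 500, replace=False)
svm = SVC(kernel="rbf").fit(X[idx], y[idx])
svm_pred = svm.predict(X)

print("Otsu:\n", confusion_matrix(y, otsu_pred.ravel()))
print("SVM:\n", confusion_matrix(y, svm_pred))
```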

Seismic interval velocity analysis on prestack depth domain for detecting the bottom simulating reflector of gas-hydrate (가스 하이드레이트 부존층의 하부 경계면을 규명하기 위한 심도영역 탄성파 구간속도 분석)

  • Ko Seung-Won; Chung Bu-Heung
    • 한국신재생에너지학회:학술대회논문집 / 2005.06a / pp.638-642 / 2005
  • For gas hydrate exploration, long-offset multichannel seismic data were acquired using a 4-km streamer in the Ulleung Basin of the East Sea. The dataset was processed to define BSRs (Bottom Simulating Reflectors) and to estimate the amount of gas hydrate. Confirming the presence of a BSR and investigating its physical properties from a seismic section are important for gas hydrate detection. In particular, a faster interval velocity overlying a slower interval velocity indicates the likely presence of gas hydrate above the BSR and free gas beneath it. Consequently, estimating correct interval velocities and analyzing their spatial variations are critical steps in gas hydrate detection using seismic reflection data. Using Dix's equation, root-mean-square (RMS) velocities can be converted into interval velocities. However, this is not a proper way to investigate interval velocities above and below the BSR, considering that RMS velocities have poor resolution and accuracy and that the conversion assumes interval velocities increase with depth. Therefore, we used the Migration Velocity Analysis (MVA) software produced by Landmark Co. to estimate correct interval velocities in detail. MVA yields the velocities of sediments between layers using common-midpoint (CMP) gathered seismic data; the CMP gathers for MVA should be produced after basic processing steps that enhance the signal-to-noise ratio of the primary reflections. A prestack depth-migrated section is produced using interval velocities, which are the key parameters governing its quality. The correctness of the interval velocities can be examined through the residual moveout (RMO) on the CMP gathers: if there is no RMO, the peaks of primary reflection events are flat across all offsets of the common-reflection-point (CRP) gathers, which proves that the prestack depth migration was done with a correct velocity field. The method used in this study, tomographic inversion, needs two initial inputs: a dataset obtained by preprocessing (removing multiples and noise) and partial stacking, and a depth-domain velocity model built by smoothing and editing the interval velocities converted from RMS velocities. After three iterations of the tomographic inversion, an optimal interval velocity field was fixed. In conclusion, the final interval velocity around the BSR decreased abruptly from 2,500 m/s to 1,400 m/s, and the BSR appeared about 200 m below the sea bottom.
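
The abstract's starting point, Dix's equation for converting RMS velocities to interval velocities, is compact enough to state directly. A minimal implementation follows; the time/velocity pairs are illustrative, not values from the Ulleung Basin dataset.

```python
import math

def dix_interval_velocities(t, v_rms):
    """Dix's equation: v_int[n]^2 = (v_rms[n]^2 * t[n] - v_rms[n-1]^2 * t[n-1])
    / (t[n] - t[n-1]). t: two-way times to layer bottoms (s); v_rms: RMS
    velocities (m/s)."""
    v_int = [v_rms[0]]                  # first layer: interval equals RMS
    for n in range(1, len(t)):
        num = v_rms[n] ** 2 * t[n] - v_rms[n - 1] ** 2 * t[n - 1]
        v_int.append(math.sqrt(num / (t[n] - t[n - 1])))
    return v_int

# Three layers with illustrative times and RMS velocities.
print(dix_interval_velocities([0.5, 1.0, 1.5], [1500.0, 1700.0, 1900.0]))
```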


Change Detection of Urban Development over Large Area using KOMPSAT Optical Imagery (KOMPSAT 광학영상을 이용한 광범위지역의 도시개발 변화탐지)

  • Han, Youkyung; Kim, Taeheon; Han, Soohee; Song, Jeongheon
    • Korean Journal of Remote Sensing / v.33 no.6_3 / pp.1223-1232 / 2017
  • This paper presents an approach to detecting changes caused by urban development over a large area using KOMPSAT optical images. To minimize radiometric dissimilarities between images acquired at different times, we apply a grid-based rough radiometric correction as a preprocessing step. To improve the accuracy of change detection for urban development, we mask out non-interest areas such as water and forest regions using the land-cover map provided by the Ministry of Environment. The Change Vector Analysis (CVA) technique is then applied to detect changes caused by urban development. To confirm the effectiveness of the proposed approach, three study sites in Sejong City are constructed by combining KOMPSAT-2 images acquired in May 2007 and May 2016 with a KOMPSAT-3 image acquired in March 2014. For the study site generated from the May 2007 KOMPSAT-2 image and the March 2014 KOMPSAT-3 image, the overall change detection accuracy was about 91.00%, demonstrating that the proposed method can effectively detect urban development changes over a large area.
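
The core of Change Vector Analysis is the per-pixel magnitude of the spectral difference vector between two co-registered images, thresholded to flag change. A minimal sketch on synthetic arrays, with an assumed mean-plus-two-sigma threshold rule:

```python
import numpy as np

rng = np.random.default_rng(0)
t1 = rng.random((100, 100, 4))            # image at time 1 (4 spectral bands)
t2 = t1.copy()
t2[40:60, 40:60] += 0.4                   # simulated urban development patch

diff = t2 - t1                            # per-pixel spectral change vectors
magnitude = np.linalg.norm(diff, axis=2)  # CVA change magnitude

# Simple global threshold (assumed rule); non-interest areas such as water
# and forest would be masked out beforehand, as in the paper.
threshold = magnitude.mean() + 2 * magnitude.std()
change_mask = magnitude > threshold
print("changed pixels:", int(change_mask.sum()))
```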

Fault Localization for Self-Managing Based on Bayesian Network (베이지안 네트워크 기반에 자가관리를 위한 결함 지역화)

  • Piao, Shun-Shan; Park, Jeong-Min; Lee, Eun-Seok
    • The KIPS Transactions: Part B / v.15B no.2 / pp.137-146 / 2008
  • Fault localization plays a significant role in large distributed systems because it can automatically identify the root cause of observed faults, supporting self-management, which remains an open topic in managing and controlling complex distributed systems to improve reliability. Although many artificial intelligence techniques have been introduced to support fault localization in recent research, especially in increasingly complex ubiquitous environments, the functions they provide, such as diagnosis and prediction, are limited. In this paper, we propose fault localization for self-management in performance evaluation, improving system reliability by learning from and analyzing real-time streams of system performance events. We use probabilistic reasoning based on Bayes' rule to provide an effective mechanism for managing and evaluating system performance parameters automatically, thereby improving system reliability. Moreover, because a large number of factors must be considered in diverse and complex fault reasoning domains, we develop an efficient method that extracts the parameters most strongly related to the observed problems and ranks them; the resulting node ordering is used in modeling the Bayesian network, improving learning efficiency. This approach enables us to diagnose the most probable cause responsible for the underlying performance problems and to predict the system's situation, avoiding potential abnormalities through post-treatments or pretreatments, respectively. An experimental application of system performance analysis using the proposed approach, together with various estimates of its efficiency and accuracy, shows that the approach is promising for the performance evaluation domain.
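
The probabilistic reasoning step based on Bayes' rule can be illustrated with a toy example: given priors over candidate root causes and likelihoods of an observed symptom, the posterior identifies the most probable cause. All causes and probabilities below are invented for illustration.

```python
# Toy Bayes' rule step: P(cause | symptom) = P(symptom | cause) * P(cause)
# / P(symptom). Causes, priors, and likelihoods are hypothetical.
priors = {"memory_leak": 0.2, "network_congestion": 0.5, "disk_io": 0.3}
likelihood = {  # P(observed symptom "high latency" | cause)
    "memory_leak": 0.3, "network_congestion": 0.8, "disk_io": 0.4,
}

evidence = sum(priors[c] * likelihood[c] for c in priors)
posterior = {c: priors[c] * likelihood[c] / evidence for c in priors}
best = max(posterior, key=posterior.get)
print(posterior, "-> most probable cause:", best)
```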

An Electric Load Forecasting Scheme for University Campus Buildings Using Artificial Neural Network and Support Vector Regression (인공 신경망과 지지 벡터 회귀분석을 이용한 대학 캠퍼스 건물의 전력 사용량 예측 기법)

  • Moon, Jihoon; Jun, Sanghoon; Park, Jinwoong; Choi, Young-Hwan; Hwang, Eenjun
    • KIPS Transactions on Computer and Communication Systems / v.5 no.10 / pp.293-302 / 2016
  • Since electricity is produced and consumed simultaneously, predicting the electric load and securing affordable electric power are necessary for a reliable power supply. A university campus, in particular, is one of the highest power-consuming institutions and tends to show wide variation in electric load depending on time and environment. For these reasons, an accurate electric load forecasting method that can predict power consumption in real time is required for efficient power supply and management. Although various factors influencing the power consumption of educational institutions have been identified by analyzing consumption patterns and usage cases, further studies are required for quantitative prediction of the electric load. In this paper, we build an electric load forecasting model by implementing and evaluating various machine learning algorithms. To do so, we consider three building clusters on a campus and collect their power consumption every 15 minutes for more than one year. In preprocessing, the features are constructed to capture the periodic characteristics of the data, and principal component analysis is performed on them. To train the electric load forecasting model, we employ both an artificial neural network and support vector regression. We evaluate the prediction performance of each forecasting model by 5-fold cross-validation and compare the predictions against the real electric load.
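
A minimal sketch of the pipeline the abstract outlines, assuming scikit-learn: periodic time features, PCA, then an artificial neural network and a support vector regressor evaluated with 5-fold cross-validation. The synthetic 15-minute load series stands in for the campus data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
steps = np.arange(4 * 24 * 30)                  # 30 days of 15-min intervals
day_phase = 2 * np.pi * (steps % 96) / 96       # position within the day
week_phase = 2 * np.pi * (steps % (96 * 7)) / (96 * 7)
X = np.column_stack([np.sin(day_phase), np.cos(day_phase),
                     np.sin(week_phase), np.cos(week_phase)])
y = 100 + 30 * np.sin(day_phase) + rng.normal(0, 5, steps.size)  # kW (toy)

for name, model in [("ANN", MLPRegressor(hidden_layer_sizes=(32,),
                                          max_iter=500, random_state=0)),
                    ("SVR", SVR(kernel="rbf", C=10.0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=3), model)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
    print(name, "5-fold R^2:", scores.mean().round(3))
```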

Vehicle Area Segmentation from Road Scenes Using Grid-Based Feature Values (격자 단위 특징값을 이용한 도로 영상의 차량 영역 분할)

  • Kim Ku-Jin; Baek Nakhoon
    • Journal of Korea Multimedia Society / v.8 no.10 / pp.1369-1382 / 2005
  • Vehicle segmentation, which extracts vehicle areas from road scenes, is one of the fundamental operations in many application areas, including Intelligent Transportation Systems. We present a vehicle segmentation approach for still images captured by outdoor CCD cameras mounted on supporting poles. We first divide the input image into a set of two-dimensional grids and then calculate edge feature values for each grid cell. By analyzing the feature values statistically, we can find the optimal rectangular grid area of the vehicle. A preprocessing step calculates statistics of the feature values from background images captured under various circumstances. For a car image, we compare its feature values to the background statistics to decide whether each grid cell belongs to the vehicle area. We then use dynamic programming to find the optimal rectangular grid area among these candidate cells. Based on statistical analysis and global search techniques, our method is more systematic than previous methods, which usually rely on heuristics, and the statistical analysis provides high reliability against noise and errors due to brightness changes, camera tremors, etc. Our prototype implementation performs the vehicle segmentation in 0.150 seconds on average for each 1280×960 car image, and it was strictly successful for 97.03% of 270 images with various kinds of noise.
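
A rough sketch of the grid-based feature step, assuming an edge-density feature per grid cell compared against background statistics; the grid size, Canny thresholds, file paths, and z-score rule are all illustrative assumptions.

```python
import cv2
import numpy as np

def grid_edge_density(gray, cell=32):
    """Edge-pixel count per cell x cell grid block (partial blocks dropped)."""
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    blocks = edges[:h - h % cell, :w - w % cell].reshape(
        h // cell, cell, w // cell, cell)
    return blocks.sum(axis=(1, 3)) / 255.0

# Background statistics from several empty-road frames (paths hypothetical).
bg = [grid_edge_density(cv2.imread(p, cv2.IMREAD_GRAYSCALE))
      for p in ["bg1.png", "bg2.png", "bg3.png"]]
mu, sigma = np.mean(bg, axis=0), np.std(bg, axis=0) + 1e-6

# Flag grid cells whose feature deviates strongly from the background.
car = grid_edge_density(cv2.imread("car.png", cv2.IMREAD_GRAYSCALE))
candidates = np.abs(car - mu) / sigma > 3.0
# A dynamic-programming search (e.g., a maximal-sum rectangle over a score
# map) would then pick the optimal rectangular region from these candidates.
```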
