• Title/Summary/Keyword: mean squared error (MSE)


Sparse Channel Estimation Based on Combined Measurements in OFDM Systems (OFDM 시스템에서 측정 벡터 결합을 이용한 채널 추정 방법)

  • Min, Byeongcheon;Park, Daeyoung
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.1 / pp.1-11 / 2016
  • We investigate compressive sensing techniques to estimate sparse channels in Orthogonal Frequency Division Multiplexing (OFDM) systems. When the channel delay spread is large, compressive sensing may not be applicable because its performance depends on the length of the measurement vector. In this paper, we increase the length of the measurement vector by adding pilot information to the OFDM data block. The increased measurement vector improves the probability of finding the path delay set and the Mean Squared Error (MSE) performance. Simulation results show that the signal recovery performance of the proposed scheme is better than that of conventional schemes.
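The sparse recovery step described above is commonly implemented with a greedy algorithm such as Orthogonal Matching Pursuit. The following is a minimal sketch under a toy measurement model y = Ah (noiseless, random Gaussian pilots); the dimensions and the use of OMP are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, sparsity):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit by least squares on the selected support."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    h_hat = np.zeros(n)
    h_hat[support] = coef
    return h_hat

# Sparse channel: 3 nonzero taps out of 64; 32 pilot measurements.
n, m, k = 64, 32, 3
h = np.zeros(n)
h[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ h
h_hat = omp(A, y, k)
mse = np.mean((h - h_hat) ** 2)
```

With more measurements (a longer measurement vector), the probability that OMP selects the true path delay set rises, which is the effect the paper exploits.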

Context-Based Minimum MSE Prediction and Entropy Coding for Lossless Image Coding

  • Musik-Kwon;Kim, Hyo-Joon;Kim, Jeong-Kwon;Kim, Jong-Hyo;Lee, Choong-Woong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.83-88 / 1999
  • In this paper, a novel gray-scale lossless image coder combining context-based minimum mean squared error (MMSE) prediction and entropy coding is proposed. To obtain the prediction context, this paper first defines directional differences according to the sharpness of edges and the gradients of local regions of the image data. Classification of the four directional differences forms a "geometry context" model, which characterizes general two-dimensional image behaviors such as directional edge regions, smooth regions, or texture. Based on this context model, adaptive DPCM prediction coefficients are calculated in the MMSE sense and the prediction is performed. The MMSE method on a context-by-context basis better accords with the minimum entropy condition, one of the major objectives of predictive coding. Context modeling is also useful in the entropy coding stage. To reduce the statistical redundancy of the residual image, many contexts are preset to take full advantage of conditional probabilities in entropy coding, and they are merged into a small number of contexts in an efficient way to reduce complexity. The proposed lossless coding scheme slightly outperforms CALIC, the state of the art, in compression ratio.
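The MMSE prediction idea can be sketched as a least-squares fit of DPCM coefficients over the pixels of a (single) context class. The two-neighbor causal template and synthetic image below are illustrative assumptions; the paper's actual templates and context classes differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def mmse_coefficients(img):
    """Least-squares fit of x[i,j] ~ w0*x[i,j-1] + w1*x[i-1,j]
    over all pixels, i.e. MMSE prediction for one context class."""
    left   = img[1:, :-1].ravel()   # west neighbor
    up     = img[:-1, 1:].ravel()   # north neighbor
    target = img[1:, 1:].ravel()
    X = np.stack([left, up], axis=1)
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    mse = np.mean((target - X @ w) ** 2)
    return w, mse

# Smooth synthetic image: causal prediction should leave a small residual.
img = np.cumsum(np.cumsum(rng.standard_normal((64, 64)), axis=0), axis=1)
w, mse = mmse_coefficients(img)
```

In the paper this fit is done separately per geometry context, so each class of edge direction or smoothness gets its own coefficient set.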

A Segmented Model with Upside-Down Bathtub Shaped Failure Intensity (Upside-Down 욕조 곡선 형태의 고장 강도를 가지는 세분화 모형)

  • Park, Woo-Jae;Kim, Sang-Boo
    • Journal of the Korean Society of Industry Convergence / v.23 no.6_2 / pp.1103-1110 / 2020
  • In this study, a segmented model with an Upside-Down bathtub shaped failure intensity for a repairable system is proposed under the assumption that failures of the repairable system occur according to a Non-Homogeneous Poisson Process. The proposed segmented model is a compound of S-PLP and LIP (Segmented Power Law Process and Logistic Intensity Process), which fits separate failure intensity functions on each segment of the time interval. Maximum likelihood estimation is used to estimate the parameters of the S-PLP and LIP model. A case study of system A shows that the S-PLP and LIP model fits better than the other models when compared by AICc (Akaike Information Criterion corrected) and MSE (Mean Squared Error). It also implies that the S-PLP and LIP model can be useful for explaining the failure intensities of similar systems.
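The AICc used for the model comparison above has a standard closed form; the following sketch shows it, assuming k parameters, n observations, and a maximized log-likelihood from the MLE fit.

```python
# AICc = AIC + 2k(k+1)/(n-k-1), where AIC = 2k - 2*logL.
# The small-sample correction term penalizes extra parameters more
# heavily when n is close to k, which matters for segmented models
# that multiply the parameter count per segment.
def aicc(log_likelihood, k, n):
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)
```

The model with the smallest AICc (and, as a second check, the smallest MSE against the observed cumulative failures) is preferred.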

Real-time Smoke Detection Research with False Positive Reduction using Spatial and Temporal Features based on Faster R-CNN

  • Lee, Sang-Hoon;Lee, Yeung-Hak
    • Journal of IKEEE / v.24 no.4 / pp.1148-1155 / 2020
  • Fires must be extinguished as quickly as possible because they cause great economic loss and take away precious human lives. Detecting smoke, which tends to appear first in a fire, is therefore of great importance. Image-based smoke detection poses many algorithmic difficulties due to the irregular shape of smoke. In this study, we introduce a new real-time smoke detection algorithm, based on Faster R-CNN applied to factory-installed surveillance cameras, that reduces the false positives caused by the irregular shape of smoke. First, we compute the global frame similarity and mean squared error (MSE) to detect smoke movement in the input from the surveillance camera. Second, we use a deep learning algorithm (Faster R-CNN) to extract candidate regions. Third, the extracted candidate regions are finally classified as smoke regions using spatial and temporal features. The proposed algorithm uses the spatial and temporal features of global and local frames to reduce false positives based on deep learning techniques. The experimental results confirm that the proposed algorithm performs well, reducing false positives by about 99.0% while maintaining smoke detection performance.
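The first stage, frame-difference MSE as a motion gate, is simple enough to sketch directly. The threshold value below is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def frame_mse(prev, curr):
    """MSE between two grayscale frames of equal shape."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    return np.mean((prev - curr) ** 2)

def motion_detected(prev, curr, threshold=25.0):
    """Gate: only frames whose MSE exceeds the threshold are passed
    on to the (expensive) Faster R-CNN candidate extraction stage."""
    return frame_mse(prev, curr) > threshold

# Static scene vs. the same scene with a simulated smoke blob.
static = np.full((120, 160), 128, dtype=np.uint8)
moved = static.copy()
moved[40:80, 50:110] += 40
```

Running the detector network only on frames that pass this gate is what keeps the pipeline real-time.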

Prediction of the Number of Crimes according to Urban Environmental Factors in the Metropolitan Area (수도권 도시 환경 요인에 따른 범죄 발생 건수 예측)

  • Ye-Won Jang;Ye-Lim Kim;Si-Hyeon Park;Jae-Young Lee;Yoo-Jin Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.321-322 / 2023
  • This paper proposes a model that predicts the number of crimes according to urban environmental factors in the metropolitan area, using the LinearRegression model of the Scikit-learn package and a Keras deep learning model. As the research method, we analyzed datasets for each autonomous district of the metropolitan area judged to have a meaningful relationship with crime occurrence, and confirmed that the numbers of CCTVs, police substations, and streetlights have a significant effect on crime occurrence. Normalization was applied to reduce scale differences among the independent variables, and a log transformation was applied to the dependent variable to secure normality. The model used the 'relu' activation function, and MSE (Mean Squared Error) was used as the metric to evaluate model performance. The program designed in this paper is expected to provide a basis for practical measures such as the additional deployment of police personnel and the expansion of safety facilities in districts with high crime rates.
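The preprocessing-plus-linear-model pipeline described above can be sketched with plain NumPy standing in for scikit-learn; the district feature values below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative per-district features: CCTV count, police substations,
# streetlights. Crime counts are synthetic, driven mainly by CCTVs.
X = rng.uniform(100, 5000, size=(25, 3))
crimes = 2000 + 0.5 * X[:, 0] + 30 * rng.standard_normal(25)

# Min-max normalize the independent variables to a common scale,
# log-transform the dependent variable toward normality.
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
y = np.log(crimes)

# Ordinary least squares with an intercept column, scored by MSE.
A = np.hstack([np.ones((25, 1)), Xn])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
mse = np.mean((A @ w - y) ** 2)
```

The same preprocessing feeds the Keras model in the paper; only the regressor changes.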


Simultaneous Motion Recognition Framework using Data Augmentation based on Muscle Activation Model (근육 활성화 모델 기반의 데이터 증강을 활용한 동시 동작 인식 프레임워크)

  • Sejin Kim;Wan Kyun Chung
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.203-212 / 2024
  • Simultaneous motion is essential in activities of daily living (ADL). For motion intention recognition, surface electromyogram (sEMG) signals and corresponding motion labels are necessary. However, collecting them is time-consuming and may increase the burden on the user. Therefore, we propose a simultaneous motion recognition framework using data augmentation based on a muscle activation model. The model consists of multiple point sources to be optimized, while the number of point sources and their initial parameters are determined automatically. Experimental results show that the framework generates data similar to real data. This is quantified with two metrics: the structural similarity index measure (SSIM) and the mean squared error (MSE). Furthermore, with a k-nearest neighbor (k-NN) or support vector machine (SVM) classifier, classification accuracy is also enhanced by the proposed framework. From these results, it can be concluded that the generalization property of the training data is improved and classification accuracy increases accordingly. We expect this framework to reduce the burden on the user from excessive and time-consuming data acquisition.
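The k-NN classification step mentioned above can be sketched in a few lines; the feature vectors here are synthetic stand-ins for sEMG features, not the paper's data.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training vectors
    (Euclidean distance in feature space)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])

rng = np.random.default_rng(3)
# Two well-separated synthetic motion classes in an 8-D feature space.
class0 = rng.normal(0.0, 0.3, size=(20, 8))
class1 = rng.normal(2.0, 0.3, size=(20, 8))
train_X = np.vstack([class0, class1])
train_y = np.array([0] * 20 + [1] * 20)
```

Augmenting `train_X` with model-generated samples, as the framework does, densifies each class cluster and tends to raise the vote's reliability near class boundaries.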

3D Cross-Modal Retrieval Using Noisy Center Loss and SimSiam for Small Batch Training

  • Yeon-Seung Choo;Boeun Kim;Hyun-Sik Kim;Yong-Suk Park
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.670-684 / 2024
  • 3D Cross-Modal Retrieval (3DCMR) is a task that retrieves 3D objects regardless of modality, such as images, meshes, and point clouds. One of the most prominent methods for 3DCMR is the Cross-Modal Center Loss Function (CLF), which applies the conventional center loss strategy to 3D cross-modal search and retrieval. Since CLF is based on center loss, the center features in CLF are susceptible to subtle changes in hyperparameters and external influences. For instance, performance degradation is observed when the batch size is too small. Furthermore, the Mean Squared Error (MSE) used in CLF cannot adapt to changes in batch size and is vulnerable to data variations that occur during actual inference, because it uses a simple Euclidean distance between multi-modal features. To address the problems that arise from small-batch training, we propose a Noisy Center Loss (NCL) method to estimate the optimal center features. In addition, we apply the simple Siamese representation learning method (SimSiam) during optimal center feature estimation to compare projected features, making the proposed method robust to changes in batch size and variations in data. As a result, the proposed approach demonstrates improved performance on the ModelNet40 dataset compared to conventional methods.
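The vanilla center loss that CLF builds on, and whose MSE-style Euclidean pull the abstract criticizes, can be sketched as follows. This illustrates the baseline term only, not the proposed NCL.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean squared Euclidean distance from each feature vector to
    the center of its own class - the pull term of center loss."""
    diffs = features - centers[labels]
    return np.mean(np.sum(diffs ** 2, axis=1))

# Tiny example: two classes, three feature vectors (any modality).
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = np.array([0, 0, 1])
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = center_loss(feats, labels, centers)
```

Because the centers are themselves estimated from mini-batches, a small batch gives noisy center estimates and hence a noisy gradient, which is the failure mode NCL targets.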

A Divide-Conquer U-Net Based High-Quality Ultrasound Image Reconstruction Using Paired Dataset (짝지어진 데이터셋을 이용한 분할-정복 U-net 기반 고화질 초음파 영상 복원)

  • Minha Yoo;Chi Young Ahn
    • Journal of Biomedical Engineering Research / v.45 no.3 / pp.118-127 / 2024
  • Deep learning methods for enhancing the quality of medical images commonly use unpaired datasets, because acquiring paired datasets through a commercial imaging system is impractical. In this paper, we propose a supervised learning method to enhance the quality of ultrasound images. The U-net model incorporates a divide-and-conquer approach that divides an image into four parts and processes them separately, to overcome data shortage and shorten the learning time. The proposed model is trained using a paired dataset consisting of 828 pairs of low-quality and high-quality images with a resolution of 512x512 pixels, obtained by varying the number of channels for the same subject. Of the 828 pairs, 684 are used as the training dataset and the remaining 144 as the test dataset. In the test results, the average Mean Squared Error (MSE) was reduced from 87.6884 in the low-quality images to 45.5108 in the restored images. The average Peak Signal-to-Noise Ratio (PSNR) improved from 28.7550 to 31.8063, and the average Structural Similarity Index (SSIM) increased from 0.4755 to 0.8511, demonstrating significant enhancement in image quality.
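Two pieces of the pipeline above are simple enough to sketch: the four-way split of a 512x512 image (the divide-and-conquer input scheme) and the PSNR metric derived from MSE. The exact split geometry is an assumption on my part.

```python
import numpy as np

def quadrants(img):
    """Split a 2-D image into four equal quadrants."""
    h, w = img.shape
    return [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
            img[h // 2:, :w // 2], img[h // 2:, w // 2:]]

def psnr(ref, test, peak=255.0):
    """PSNR in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

img = np.arange(512 * 512, dtype=np.float64).reshape(512, 512) % 256
parts = quadrants(img)
```

Processing 256x256 quadrants quarters the per-sample input size, which is what shortens training, and the restored quadrants are reassembled before the MSE/PSNR/SSIM evaluation.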

New Distance Measure for Vector Quantization of Image (영상 벡터양자화를 위한 편차분산을 이용한 거리계산법)

  • Lee, Kyeong-Hwan;Choi, Jung-Hyun;Lee, Bub-Ki;Cheong, Won-Sik;Kim, Kyoung-Kyoo;Kim, Duk-Gyoo
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.11 / pp.89-94 / 1999
  • In vector quantization (VQ), the mean squared error (MSE) is widely used as a distance measure between vectors, but the distance between averages appears as a dominant quantity in the MSE. For image vectors, the coincidence of edge patterns is also important considering the human visual system (HVS). Therefore, this paper presents a new distance measure using the variance of difference (VD) as a criterion for the coincidence of edge patterns. Using this measure in VQ encoding reduces the degradation of edge regions in the reconstructed image, and applying it to codebook design yields a final codebook with a variety of edge codevectors instead of redundant shade ones.
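The contrast between MSE and the variance-of-difference measure can be shown directly: MSE penalizes a uniform brightness shift even when the edge pattern matches, while VD removes the mean of the difference first. The toy vectors are illustrative.

```python
import numpy as np

def mse(x, y):
    return np.mean((x - y) ** 2)

def vd(x, y):
    """Variance of the difference vector: subtract the mean difference,
    so only pattern (edge) mismatch remains."""
    d = x - y
    return np.mean((d - d.mean()) ** 2)

edge = np.array([0., 0., 0., 0., 255., 255., 255., 255.])
same_edge_brighter = edge + 20.0        # same edge, uniformly brighter
flat = np.full(8, edge.mean())          # no edge, same average level
```

Under MSE the brighter-but-identical edge looks "far" from the original, while under VD it is a perfect match and the flat block is the poor one, which is the HVS-motivated behavior the paper wants.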


Naval Vessel Spare Parts Demand Forecasting Using Data Mining (데이터마이닝을 활용한 해군함정 수리부속 수요예측)

  • Yoon, Hyunmin;Kim, Suhwan
    • Journal of Korean Society of Industrial and Systems Engineering / v.40 no.4 / pp.253-259 / 2017
  • Recent developments in science and technology have modernized the weapon systems of the ROKN (Republic of Korea Navy). Although the cost of purchasing, operating, and maintaining these cutting-edge weapon systems has increased significantly, the national defense expenditure is under a tight budget constraint. To maintain the availability of ships at low cost, accurate demand forecasts for spare parts are needed. We attempted to find consumption patterns using data mining techniques. First, we gathered a large amount of component consumption data through the DELIIS (Defense Logistics Integrated Information System). Through data collection, we obtained 42 variables such as annual consumption quantity, ASL selection quantity, and order-release ratio. The objective variable is the quantity of spare parts purchased in f-year, and MSE (Mean Squared Error) is used as the measure of predictive power. To construct an optimal demand forecasting model, a regression tree model, a random forest model, a neural network model, and a linear regression model were used as data mining techniques. The open-source software R was used for model construction. The results show that the random forest model achieves the best MSE. The important variables used in all models are consumption quantity, ASL selection quantity, and order-release rate. Our approach shows improved performance in demand forecasting, with higher accuracy than previous work, and data mining can also be used to identify variables that are related to demand forecasting.
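The model-selection step above, scoring candidate forecasters by MSE and keeping the best, can be sketched in Python (standing in for the R workflow). The two "models" here are trivial baselines (last-value and 3-year moving average) on fabricated consumption data, not the study's tree/forest/network models.

```python
import numpy as np

def mse(actual, predicted):
    return float(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

# Fabricated annual consumption history for one spare part.
history = np.array([10, 12, 11, 15, 14, 16, 18, 17], dtype=float)

actual = history[3:]                 # years we pretend to forecast
naive = history[2:-1]                # forecast = previous year's value
moving = np.convolve(history, np.ones(3) / 3, mode="valid")[:-1]

# Score each candidate forecaster and keep the one with lowest MSE.
scores = {"naive": mse(actual, naive), "moving_avg": mse(actual, moving)}
best = min(scores, key=scores.get)
```

In the study this comparison runs over four real models and 42 input variables, but the selection criterion is the same held-out MSE.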