• Title/Summary/Keyword: sparse data matrix

Efficient calculation method of derivative of traveltime using SWEET algorithm for refraction tomography

  • Choi, Yun-Seok;Shin, Chang-Soo
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings
    • /
    • 2003.11a
    • /
    • pp.402-409
    • /
    • 2003
  • Inversion of traveltime requires an efficient algorithm for computing the traveltime as well as its Fréchet derivative. We compute the traveltime of the head waves using the damped wave solution in the Laplace domain, and then present a new algorithm for calculating the Fréchet derivative of the head-wave traveltimes by exploiting the numerical structure of the finite element method, modern sparse matrix technology, and the recently developed SWEET algorithm. We then use a properly regularized steepest descent method to invert the traveltimes of the Marmousi-2 model. Through numerical tests, we demonstrate that refraction tomography with large-aperture data can be used to construct the initial velocity model for prestack depth migration.
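The regularized steepest-descent inversion mentioned in this abstract can be sketched as a simple iteration on a sparse linearized system. This is an illustrative stand-in, not the paper's SWEET-based implementation: the sparse matrix `J` merely plays the role of the Fréchet-derivative matrix, and all sizes, the damping weight `lam`, and the step rule are assumptions.

```python
import numpy as np
import scipy.sparse as sp

# One regularized steepest-descent loop for a linearized traveltime
# inversion: m <- m - step * (J^T r + lam * m). The sparse J stands in
# for the Frechet-derivative matrix; the SWEET-based derivative
# computation of the paper is not reproduced here.

rng = np.random.default_rng(0)
n_data, n_model = 50, 30
J = sp.random(n_data, n_model, density=0.1, random_state=0)  # sparse Jacobian
m_true = rng.normal(size=n_model)
d_obs = J @ m_true                       # synthetic "observed" traveltimes

lam = 1e-3                               # Tikhonov regularization weight
step = 1.0 / (np.linalg.norm(J.toarray(), 2) ** 2 + lam)  # safe step size

m = np.zeros(n_model)
for _ in range(500):
    r = J @ m - d_obs                    # traveltime residual
    m = m - step * (J.T @ r + lam * m)   # regularized gradient step

misfit = np.linalg.norm(J @ m - d_obs)
```

Because `J` is sparse, each gradient step costs only a pair of sparse matrix-vector products, which is the property the abstract's sparse-matrix machinery exploits.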

Analysis of 3D Microwave Oven Using Finite Element Method (Electromagnetic Analysis of a Microwave Oven Cavity)

  • Park, Kweong-Soo;Kim, Gweon-Jib;Shon, Jong-Chull;Kim, Sang-Gweon;Park, Yoon-Ser
    • Proceedings of the KIEE Conference
    • /
    • 1996.07c
    • /
    • pp.1753-1755
    • /
    • 1996
  • This paper presents an analysis of a 3D microwave oven, taking the formed shape of its cavity into account, and compares the results with experimental data. The finite element method (FEM) with edge elements is employed for the analysis, and the resulting large sparse system matrix equation is solved using a parallelized QMR method. Analysis of a 3D cavity involves difficulties such as spurious solutions, large memory requirements, and long computation times. We overcome these difficulties by using edge elements to suppress the spurious solutions, and by parallelizing the QMR method with the aid of the Parallel Virtual Machine (PVM) to reduce memory use and computation time.
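The core numerical step, a QMR solve of a large sparse system, can be sketched with SciPy's built-in (serial) QMR solver. The PVM-parallelized solver and the edge-element FEM assembly of the paper are not reproduced; the tridiagonal matrix below is only a simple finite-difference stand-in for an FEM system matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# Solve a sparse linear system A x = b with QMR, the Krylov method the
# paper parallelizes. A is a 1-D finite-difference stand-in, not an
# actual microwave-cavity FEM matrix.

n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = qmr(A, b, atol=1e-8)           # info == 0 on convergence
residual = np.linalg.norm(A @ x - b)
```

QMR needs only matrix-vector products, which is what makes it attractive for distributing a large sparse FEM matrix across processors.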

Reduced Complexity Signal Detection for OFDM Systems with Transmit Diversity

  • Kim, Jae-Kwon;Heath, Robert W. Jr.;Powers, Edward J.
    • Journal of Communications and Networks
    • /
    • v.9 no.1
    • /
    • pp.75-83
    • /
    • 2007
  • Orthogonal frequency division multiplexing (OFDM) systems with multiple transmit antennas can exploit space-time block coding on each subchannel for reliable data transmission. Space-time coded OFDM systems, however, are very sensitive to time-variant channels, because the channels need to be static over multiple OFDM symbol periods. In this paper, we propose to mitigate the channel variations with a frequency-domain linear filter that exploits the sparse structure of the system matrix in the frequency domain. Our approach has reduced complexity compared with alternative approaches based on time-domain block-linear filters. Simulation results demonstrate that the proposed frequency-domain block-linear filter reduces computational complexity by more than a factor of ten at the cost of a small performance degradation, compared with a time-domain block-linear filter.
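The complexity argument rests on the frequency-domain system matrix being approximately banded, so a banded solver (cost O(N·b²)) can replace a general dense solve (O(N³)). The sketch below illustrates that idea only; the banded matrix is a random stand-in, not the space-time-coded OFDM model of the paper, and the bandwidth `b` is an assumption.

```python
import numpy as np
from scipy.linalg import solve_banded

# Solve a banded complex system the way a frequency-domain block-linear
# filter would: only the 2b+1 diagonals are stored and factored.
N, b = 64, 2                           # subcarriers, half-bandwidth (assumed)
rng = np.random.default_rng(1)

# Banded storage: ab[b + i - j, j] holds A[i, j].
ab = np.zeros((2 * b + 1, N), dtype=complex)
ab[b] = 1.0 + 0.05 * rng.normal(size=N)            # dominant main diagonal
for k in range(1, b + 1):
    ab[b - k, k:] = 0.1 * rng.normal(size=N - k)   # super-diagonals (ICI)
    ab[b + k, :-k] = 0.1 * rng.normal(size=N - k)  # sub-diagonals (ICI)

y = rng.normal(size=N) + 1j * rng.normal(size=N)   # received subcarriers
x = solve_banded((b, b), ab, y)                    # banded linear filter

# Rebuild the dense matrix only to verify the banded solve.
A = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(max(0, i - b), min(N, i + b + 1)):
        A[i, j] = ab[b + i - j, j]
residual = np.linalg.norm(A @ x - y)
```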

Optical Misalignment Cancellation via Online L1 Optimization

  • Kim, Jong-Han;Han, Yudeog;Whang, Ick Ho
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.7
    • /
    • pp.1078-1082
    • /
    • 2017
  • This paper presents an L1-optimization-based filtering technique that effectively eliminates the optical misalignment effects encountered in the squint guidance mode with strapdown seekers. We formulate a series of L1 optimization problems to separate the bias and gradient components from the measured data, and solve them via the alternating direction method of multipliers (ADMM) and sparse matrix decomposition techniques. The proposed technique rapidly detects arbitrary discontinuities and gradient changes in the measured signals, and is shown to effectively cancel the undesirable effects of the seeker misalignment angles. The technique was implemented on embedded flight computers, and its real-time operational performance was verified via hardware-in-the-loop simulation (HILS) tests running in parallel with automatic target recognition algorithms on infra-red synthetic target images.
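An ADMM solver for an L1 problem of the kind described can be sketched with a total-variation formulation: penalizing the L1 norm of first differences yields a piecewise-constant estimate, so a step discontinuity (like a misalignment bias jump) is isolated cleanly. This is a generic illustration, not the paper's exact problem; the objective, `lam`, `rho`, and the test signal are assumptions.

```python
import numpy as np

# ADMM for min_x 0.5*||x - b||^2 + lam*||D x||_1, D = first differences.
def soft(v, t):
    """Soft-threshold (the L1 proximal operator)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_admm(b, lam=1.0, rho=1.0, iters=300):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)                  # (n-1) x n difference matrix
    Q = np.linalg.inv(np.eye(n) + rho * D.T @ D)    # cached x-update solve
    x, z, u = b.copy(), D @ b, np.zeros(n - 1)
    for _ in range(iters):
        x = Q @ (b + rho * D.T @ (z - u))           # quadratic x-update
        z = soft(D @ x + u, lam / rho)              # L1 prox (soft-threshold)
        u = u + D @ x - z                           # dual ascent
    return x

# Noisy signal with a bias jump at index 50.
rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(50), 2.0 * np.ones(50)]) + 0.1 * rng.normal(size=100)
x = tv_admm(b, lam=2.0)
```

The recovered `x` is flat within each segment, so the jump location stands out as the single large first difference.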

A Parallel Algorithm for Large DOF Structural Analysis Problems

  • Kim, Min-Seok;Lee, Jee-Ho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.23 no.5
    • /
    • pp.475-482
    • /
    • 2010
  • In this paper, an efficient two-level parallel domain decomposition algorithm is proposed to solve large-DOF structural problems. Each subdomain is composed of a coarse problem and a local problem. In the coarse problem, displacements at coarse nodes are computed by an iterative method that does not need to assemble a stiffness matrix for the whole coarse problem; displacements at local nodes are then computed by a multifrontal sparse solver. A parallel version of the preconditioned conjugate gradient method (PCG) is developed to solve the coarse problem iteratively; it minimizes the amount of data communication between processors, increasing the feasible problem size while maintaining computational efficiency. The test results show that the suggested algorithm is scalable in computing performance and offers an efficient approach to large-DOF structural problems.
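The PCG building block the paper parallelizes can be sketched serially with a Jacobi (diagonal) preconditioner. The domain decomposition and inter-processor communication are not shown, and the 1-D Laplacian below is only a stand-in for a stiffness matrix.

```python
import numpy as np

# Minimal preconditioned conjugate gradient for SPD systems A x = b.
def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r              # apply diagonal (Jacobi) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # conjugate search direction
        rz = rz_new
    return x

# SPD stand-in for a stiffness matrix: a 1-D Laplacian.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```

Each iteration needs only a matrix-vector product and a few vector reductions, which is why the communication pattern, and not the arithmetic, dominates a parallel PCG.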

Fast Cardiac CINE MRI by Iterative Truncation of Small Transformed Coefficients

  • Park, Jinho;Hong, Hye-Jin;Yang, Young-Joong;Ahn, Chang-Beom
    • Investigative Magnetic Resonance Imaging
    • /
    • v.19 no.1
    • /
    • pp.19-30
    • /
    • 2015
  • Purpose: A new compressed sensing technique based on iterative truncation of small transformed coefficients (ITSC) is proposed for fast cardiac CINE MRI. Materials and Methods: The proposed reconstruction is composed of two processes: truncation of the small transformed coefficients in the r-f domain, and restoration of the measured data in the k-t domain. The two processes are applied sequentially and iteratively until the reconstructed images converge, under the assumption that cardiac CINE images are inherently sparse in the r-f domain. A novel sampling strategy to reduce the normalized mean square error of the reconstructed images is also proposed. Results: The technique shows the smallest normalized mean square error among the four methods under comparison (zero filling, view sharing, k-t FOCUSS, and ITSC). ITSC was applied to multi-slice cardiac CINE imaging with 2 to 8 slices in a single breath-hold, to demonstrate the clinical usefulness of the technique. Conclusion: Reconstructed images with compression factors of 3-4 appear very close to the images without compression. Furthermore, the proposed algorithm is computationally efficient and stable, requiring no matrix inversion during reconstruction.
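The two-step iteration (truncate small transform coefficients, then restore the measured data) can be sketched on a 1-D toy signal, with the Fourier domain standing in for the r-f domain. The sampling pattern, threshold, and signal are illustrative assumptions, not the paper's k-t sampling strategy.

```python
import numpy as np

# ITSC-style iteration on a signal that is sparse in the Fourier domain,
# measured at a random ~70% subset of time samples.
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
signal = np.cos(2 * np.pi * 5 * t / n)      # 2 nonzero Fourier coefficients
mask = rng.random(n) < 0.7                  # which samples were measured
x = np.where(mask, signal, 0.0)             # zero-filled starting estimate

for _ in range(100):
    coeffs = np.fft.fft(x)
    small = np.abs(coeffs) < 0.2 * np.abs(coeffs).max()
    coeffs[small] = 0                       # truncate small coefficients
    x = np.real(np.fft.ifft(coeffs))        # back to the sample domain
    x[mask] = signal[mask]                  # restore the measured data

err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
```

Like the method in the abstract, this uses only FFTs and pointwise operations, with no matrix inversion anywhere in the loop.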

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decisions. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, with so many products and reviews, the advantage of e-commerce can be overwhelmed by rising search costs: reading all of the reviews to find the pros and cons of a certain product can be exhausting. To help users find the most useful information without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and different methods have been developed to classify and recommend useful reviews, primarily using the feedback customers give about a review's helpfulness. Most shopping websites show the average preference for a product, the number of customers who participated in preference voting, and the preference distribution, with most helpfulness information collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and places the most helpful favorable review and the most helpful critical review at the top of the product-review list. Some companies also predict the usefulness of a review from attributes such as its length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: all words are extracted from the reviews, and a matrix of term-occurrence counts per review is built.
    Because there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete some terms on the basis of sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index for selecting the words to be deleted: many words appear similarly in both classes, useful and not useful, and have little or even a negative effect on classification performance. We define these as neutral terms and delete those that appear in both classes with similar frequency. After deleting sparse words, we select further words to delete in terms of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV program, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes among total votes as the threshold separating useful from not-useful reviews, and randomly selected 1,500 useful and 1,500 not-useful reviews per category. We then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performances in terms of precision, recall, and F-measure. Although performance varies across product categories and data sets, deleting terms by both sparsity and neutrality showed the best F-measure for the two classification algorithms; however, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Care is therefore needed when selecting a term-deletion method and a classification algorithm for a given data set.
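A neutrality score of the kind the study describes can be sketched as follows: a term that appears with similar relative frequency in useful and not-useful reviews carries little class information and is a candidate for deletion. The exact index definition, the threshold, and the tiny corpus below are illustrative assumptions, not the paper's.

```python
from collections import Counter

# Toy corpora of "useful" and "not useful" reviews.
useful = ["great battery and screen", "great price fast shipping",
          "battery lasts long great value"]
not_useful = ["great", "do not buy", "bad bad battery"]

def term_freqs(docs):
    """Relative frequency of each term within a document collection."""
    c = Counter(w for d in docs for w in d.split())
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

fu, fn = term_freqs(useful), term_freqs(not_useful)

def neutrality(term):
    """1.0 = equally frequent in both classes, 0.0 = one class only."""
    pu, pn = fu.get(term, 0.0), fn.get(term, 0.0)
    if pu + pn == 0:
        return 1.0
    return 1.0 - abs(pu - pn) / (pu + pn)

# Terms above a neutrality threshold would be dropped before building
# the term-document matrix.
neutral_terms = {t for t in set(fu) | set(fn) if neutrality(t) > 0.8}
```

Here "battery" occurs with nearly equal relative frequency in both classes and is pruned, while class-specific words like "bad" survive.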

Digital Signage Service through Customer Behavior Pattern Analysis

  • Shin, Min-Chan;Park, Jun-Hee;Lee, Ji-Hoon;Moon, Nammee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.9
    • /
    • pp.53-62
    • /
    • 2020
  • Product recommendation services studied to date make recommendations based only on the customer's purchase history. In this paper, we propose a digital signage service based on customer behavior-pattern analysis, which makes recommendations using not only purchase history but also the behavior patterns customers exhibit when choosing products. The service analyzes customer behavior patterns and extracts levels of practical interest in products. It learns the extracted interest levels and the customers' purchase history through the Wide & Deep model, and on this basis predicts the sparse interest vector over other products through matrix factorization (MF). After deriving a ranking of predicted product interest, the service uses indoor signage that can interact with customers to display suitable advertisements. The proposed service makes it possible to capture customers' interest information in offline as well as online environments, and creates a satisfying purchasing environment by showing customers advertisements suited to them, rather than advertisements that advertisers expose at random.
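The MF prediction step mentioned above can be sketched as plain SGD matrix factorization: observed entries of a user-item interest matrix train latent factors, which then predict the unobserved (sparse) entries. The Wide & Deep stage is not reproduced, and the matrix sizes, rank, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Build a low-rank "true" interest matrix, observe a sparse subset of it,
# and fit user/item latent factors by SGD on the observed entries.
rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 3
P_true = rng.normal(size=(n_users, k))
Q_true = rng.normal(size=(n_items, k))
R = P_true @ Q_true.T                      # full interest matrix

mask = rng.random(R.shape) < 0.4           # only ~40% of entries observed
obs = np.argwhere(mask)

P = 0.1 * rng.normal(size=(n_users, k))    # latent user factors
Q = 0.1 * rng.normal(size=(n_items, k))    # latent item factors
lr, reg = 0.02, 0.01
for _ in range(300):                       # SGD epochs over observed entries
    for u, i in obs:
        e = R[u, i] - P[u] @ Q[i]          # prediction error
        pu = P[u].copy()
        P[u] += lr * (e * Q[i] - reg * pu)
        Q[i] += lr * (e * pu - reg * Q[i])

rmse_obs = np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in obs]))
```

After training, `P @ Q.T` fills in predicted interest for every user-item pair, from which a ranking for the signage display can be derived.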

PARAFAC Tensor Reconstruction for Recommender System based on Apache Spark

  • Im, Eo-Jin;Yong, Hwan-Seung
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.4
    • /
    • pp.443-454
    • /
    • 2019
  • In recent years, there has been active research on recommender systems that consider three or more inputs in addition to users and items, forming a multi-dimensional array known as a tensor. The main issue with using a tensor is that it has many missing values, making it sparse. To solve this, the tensor can be decomposed by a tensor decomposition algorithm into lower-dimensional arrays called factor matrices; the tensor is then reconstructed from these factor matrices, filling the originally empty cells with predicted values. This is called tensor reconstruction. In this paper, we propose a user-based Top-K recommender system using normalized PARAFAC tensor reconstruction, which factorizes a tensor into factor matrices and reconstructs the tensor from them. Before decomposition, the original tensor is normalized along each dimension to reduce overfitting. Using a real-world dataset, this paper demonstrates the processing of a large amount of data and implements the recommender system on Apache Spark. In addition, this study confirms that recommendation performance is improved through normalization of the tensor.
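The reconstruction step itself is compact: given CP/PARAFAC factor matrices, the full tensor is rebuilt as T[i,j,k] = Σ_r A[i,r]·B[j,r]·C[k,r], so every originally empty cell receives a prediction. The sketch below shows only this step with random factors; the decomposition itself (e.g. ALS on Spark) and the normalization are not reproduced, and the dimension names are assumptions.

```python
import numpy as np

# Rebuild a 3-way tensor from PARAFAC factor matrices
# (users x rank, items x rank, context x rank).
rng = np.random.default_rng(0)
rank, I, J, K = 2, 4, 5, 3
A = rng.normal(size=(I, rank))
B = rng.normal(size=(J, rank))
C = rng.normal(size=(K, rank))

T = np.einsum("ir,jr,kr->ijk", A, B, C)    # reconstructed tensor

# Top-K recommendation for user 0 in context 0: rank items by score.
scores = T[0, :, 0]
top2 = np.argsort(scores)[::-1][:2]
```

The `einsum` call is exactly the rank-`r` sum of outer products that defines the CP model, evaluated for all cells at once.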

GAIN-QoS: A Novel QoS Prediction Model for Edge Computing

  • Jiwon Choi;Jaewook Lee;Duksan Ryu;Suntae Kim;Jongmoon Baik
    • Journal of Web Engineering
    • /
    • v.21 no.1
    • /
    • pp.27-52
    • /
    • 2021
  • With recent increases in the number of network-connected devices, the number of edge computing services that provide similar functions has increased, so it is important to recommend an optimal edge computing service based on quality of service (QoS). In the real world, however, QoS data suffer from a cold-start problem: highly sparse invocations, which make it difficult to recommend a suitable service to the user. Previous work applied deep learning techniques to this problem, or used context information to extract deep features between users and services, but did not consider the edge computing environment. Our goal is to predict QoS values in real edge computing environments with improved accuracy. To this end, we propose the GAIN-QoS technique. It clusters services based on their location information, calculates the distance between services and users in each cluster, and collects the QoS values of users within a certain distance. We then apply a Generative Adversarial Imputation Nets (GAIN) model and perform QoS prediction based on the reconstructed user-service invocation matrix. When the matrix density is low, GAIN-QoS shows performance superior to other techniques, and the service-user distance only slightly affects performance. Thus, compared to other methods, the proposed method can significantly improve the accuracy of QoS prediction for edge computing, which suffers from the cold-start problem.
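The locality step described above can be sketched as distance-masking a sparse user-service QoS matrix and then imputing the remaining cells. Plain per-service mean imputation stands in for the GAIN model here, and the coordinates, distance threshold, and matrix sizes are illustrative assumptions.

```python
import numpy as np

# Keep only QoS observations from users within a distance threshold of
# each service, then impute the rest (mean imputation as a GAIN stand-in).
rng = np.random.default_rng(0)
n_users, n_services = 8, 5
user_xy = rng.uniform(0, 10, size=(n_users, 2))    # user locations
svc_xy = rng.uniform(0, 10, size=(n_services, 2))  # service locations

qos = rng.uniform(0.1, 2.0, size=(n_users, n_services))  # e.g. response time
observed = rng.random((n_users, n_services)) < 0.6       # sparse invocations

# Euclidean user-service distances; keep only nearby observations.
dist = np.linalg.norm(user_xy[:, None, :] - svc_xy[None, :, :], axis=-1)
keep = observed & (dist < 6.0)

# Fill the dropped/missing cells with the per-service mean of kept values
# (falling back to the global mean for services with nothing kept).
col_sums = (qos * keep).sum(axis=0)
col_counts = keep.sum(axis=0)
global_mean = qos[keep].mean()
col_means = np.where(col_counts > 0,
                     col_sums / np.maximum(col_counts, 1), global_mean)
imputed = np.where(keep, qos, col_means)
```

The resulting dense matrix plays the role of the reconstructed invocation matrix on which QoS prediction is performed.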