• Title/Summary/Keyword: 연산 기법 (computation techniques)

Path Loss Model with Multiple-Antenna (다중 안테나를 고려한 경로 손실 모델)

  • Lee, Jun-Hyun;Lee, Dong-Hyung;Keum, Hong-Sik;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.25 no.7 / pp.747-756 / 2014
  • In this paper, we propose a path loss model for multiple-antenna systems that accounts for the diversity effect. Current wireless communication systems use multiple antennas to improve channel capacity or diversity gain. Until recently, however, most research on path loss models considered only the geographical environment between the transmitter and the receiver; no study had addressed a path loss model that incorporates the diversity effect. Modern wireless systems commonly employ diversity schemes to enhance channel capacity, and we expect this trend to intensify in future systems, yet no established path loss model predicts the received signal strength in such systems. To predict the received signal strength, we therefore model the change in SNR produced by the diversity gain, which we calculate from conventional BER curves. In the proposed model, the diversity effect saturates when the number of receive antennas exceeds 7, so we consider up to 10 receive antennas. The RMSE between the proposed model and the calculated values is 1. The proposed model can predict the received signal loss in systems using multiple antennas.
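To make the prediction step concrete, here is a minimal Python sketch of the kind of calculation the abstract describes: transmit power minus path loss plus a diversity gain that saturates past 7 receive antennas. The free-space loss formula and the 10·log10(N) gain term are illustrative stand-ins, not the paper's fitted model.

```python
import math

def path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (a stand-in for the paper's channel model)."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

def diversity_gain_db(n_rx: int, saturation: int = 7) -> float:
    """Illustrative diversity gain that saturates beyond `saturation` antennas."""
    effective = min(n_rx, saturation)
    return 10 * math.log10(effective)  # placeholder for a BER-curve-derived gain

def received_power_dbm(tx_dbm: float, distance_m: float,
                       freq_hz: float, n_rx: int) -> float:
    """Predicted received strength: TX power - path loss + diversity gain."""
    return tx_dbm - path_loss_db(distance_m, freq_hz) + diversity_gain_db(n_rx)

# Beyond 7 antennas the gain term no longer grows, mirroring the saturation above.
print(received_power_dbm(30.0, 100.0, 2.4e9, n_rx=8))
```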

An Improved Technique of Fitness Evaluation for Automated Test Data Generation (테스트 데이터 자동 생성을 위한 적합도 평가 방법의 효율성 향상 기법)

  • Lee, Sun-Yul;Choi, Hyun-Jae;Jeong, Yeon-Ji;Bae, Jung-Ho;Kim, Tae-Ho;Chae, Heung-Suk
    • Journal of KIISE:Software and Applications / v.37 no.12 / pp.882-891 / 2010
  • Many automated dynamic test data generation techniques have been proposed. These techniques evaluate the fitness of test data by executing an instrumented Software Under Test (SUT) and then generate new test data based on the evaluated fitness values and optimization algorithms. Previous research and experiments have shown that these techniques generate effective test data. However, the optimization algorithms in these techniques take much time to generate test data, which results in a high test case generation cost. In this paper, we propose a technique for reducing the time spent evaluating the fitness of test data within dynamic test data generation methods. We introduce the concept of a Fitness Evaluation Program (FEP), derived from a path constraint of the SUT. We suggest a test data generation method based on the FEP and implement a test generation tool named ConGA. We also apply ConGA to generate test cases for C programs and evaluate the efficiency of the FEP-based test case generation technique. The experiments show that the proposed technique reduces test data generation time by 20% on average.
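As an illustration of the FEP idea, the sketch below evaluates fitness directly from a path constraint using a branch-distance encoding common in search-based testing; the constraint, constant K, and function names are assumptions for illustration, not ConGA's actual implementation.

```python
# Instead of executing the whole instrumented SUT, a small program derived
# from the target path computes how close an input is to satisfying each
# branch condition along that path.

K = 1.0  # small penalty constant for strict inequalities

def branch_distance_gt(lhs: float, rhs: float) -> float:
    """Distance to satisfying `lhs > rhs` (0.0 when already satisfied)."""
    return 0.0 if lhs > rhs else (rhs - lhs) + K

def branch_distance_eq(lhs: float, rhs: float) -> float:
    """Distance to satisfying `lhs == rhs`."""
    return abs(lhs - rhs)

def fitness(x: float, y: float) -> float:
    # Hypothetical target path: requires (x > 10) and (y == x * 2).
    return branch_distance_gt(x, 10) + branch_distance_eq(y, x * 2)

# Lower fitness means the input is closer to driving the SUT down the path.
print(fitness(3.0, 5.0), fitness(12.0, 24.0))  # 9.0 0.0
```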

Rapid Hybrid Recommender System with Web Log for Outbound Leisure Products (웹로그를 활용한 고속 하이브리드 해외여행 상품 추천시스템)

  • Lee, Kyu Shik;Yoon, Ji Won
    • KIISE Transactions on Computing Practices / v.22 no.12 / pp.646-653 / 2016
  • The outbound travel market is a rapidly growing global industry that has grown into an 11 trillion won business. Many recommender systems based on collaborative and content filtering target existing purchase logs or rely on studies of product similarity. These approaches are not very efficient in this domain, because the data cannot be obtained in advance and accumulating a sufficient amount of it is relatively slow. Outbound products are characterized by infrequent purchases (typically no more than twice a year) and relatively high prices. Since repeat purchases are rare in the outbound market, conventional recommender systems that profile existing customers fall short and have clear limitations. Therefore, to cope with this data scarcity, we propose an improved customer-profiling method using web usage mining, association rule algorithms, and rule-based algorithms for a faster outbound product recommender system.
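As a hedged sketch of the association-rule ingredient, the snippet below mines support/confidence rules from web-log sessions; the session contents and thresholds are hypothetical, and a real system would first extract such sessions from raw logs.

```python
from itertools import combinations
from collections import Counter

# Hypothetical page-view sessions reconstructed from web logs.
sessions = [
    {"guam_hotel", "guam_flight", "saipan_tour"},
    {"guam_hotel", "guam_flight"},
    {"osaka_flight", "osaka_hotel"},
    {"guam_hotel", "saipan_tour"},
]

MIN_SUPPORT, MIN_CONFIDENCE = 0.5, 0.6
n = len(sessions)
item_count = Counter(i for s in sessions for i in s)
pair_count = Counter(p for s in sessions for p in combinations(sorted(s), 2))

# Emit rules lhs -> rhs whose support and confidence clear the thresholds.
for (a, b), cnt in pair_count.items():
    support = cnt / n
    if support < MIN_SUPPORT:
        continue
    for lhs, rhs in ((a, b), (b, a)):
        confidence = cnt / item_count[lhs]
        if confidence >= MIN_CONFIDENCE:
            print(f"{lhs} -> {rhs} (support={support:.2f}, conf={confidence:.2f})")
```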

A Load Balancing Method using Partition Tuning for Pipelined Multi-way Hash Join (다중 해시 조인의 파이프라인 처리에서 분할 조율을 통한 부하 균형 유지 방법)

  • Mun, Jin-Gyu;Jin, Seong-Il;Jo, Seong-Hyeon
    • Journal of KIISE:Databases / v.29 no.3 / pp.180-192 / 2002
  • We investigate the effect of data skew in join attributes on the performance of a pipelined multi-way hash join method and propose two new hash join methods for the shared-nothing multiprocessor environment. The first proposed method allocates buckets statically in round-robin fashion, and the second allocates buckets dynamically based on a frequency distribution. Using hash-based joins, multiple joins can be pipelined so that the early results of a join are sent on to the next join before the whole join is completed, without being staged on disk. The shared-nothing multiprocessor architecture is known to be more scalable for supporting very large databases. However, this hardware structure is very sensitive to data skew: unless the pipelined execution of multiple hash joins includes a dynamic load balancing mechanism, the skew effect can severely deteriorate system performance. In this paper, we derive an execution model of the pipeline segment and a cost model, and develop a simulator for the study. As shown by our simulations over a wide range of parameters, join selectivities, and relation sizes, system performance deteriorates as the degree of data skew grows. However, the proposed method, using a large number of buckets and a tuning technique, offers substantial robustness against a wide range of skew conditions.
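To illustrate the two allocation strategies, the sketch below contrasts static round-robin placement with a frequency-aware greedy placement; the bucket sizes and the greedy rule are illustrative assumptions, not the paper's exact tuning algorithm.

```python
def round_robin(buckets: list[int], n_nodes: int) -> list[list[int]]:
    """Assign bucket i to node i mod n_nodes, ignoring bucket sizes."""
    nodes = [[] for _ in range(n_nodes)]
    for i, _ in enumerate(buckets):
        nodes[i % n_nodes].append(i)
    return nodes

def frequency_based(buckets: list[int], n_nodes: int) -> list[list[int]]:
    """Greedily place the largest bucket on the currently lightest node."""
    nodes, loads = [[] for _ in range(n_nodes)], [0] * n_nodes
    for i in sorted(range(len(buckets)), key=lambda i: -buckets[i]):
        target = loads.index(min(loads))
        nodes[target].append(i)
        loads[target] += buckets[i]
    return nodes

tuple_counts = [900, 40, 35, 30, 25, 20, 15, 10]  # skewed bucket sizes
for alloc in (round_robin, frequency_based):
    nodes = alloc(tuple_counts, 4)
    print(alloc.__name__, [sum(tuple_counts[i] for i in node) for node in nodes])
```

With one dominant bucket, even the frequency-aware placement stays unbalanced, which is why the abstract emphasizes using a large number of (finer-grained) buckets together with the tuning step.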

Transform Skip Mode Fast Decision Method for HEVC Encoding (HEVC 부호화를 위한 변환생략 모드 고속 선택 방법)

  • Yang, Seungha;Shim, Hiuk Jae;Lee, Dahee;Jeon, Byeungwoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.4 / pp.172-179 / 2014
  • HEVC (High Efficiency Video Coding) fine-tuned many existing coding tools and also adopted many new coding techniques. As a result, HEVC achieves roughly twice the compression efficiency of the previous video coding standard, H.264/AVC. One of the newly adopted tools in HEVC is the transform skip scheme, which performs quantization without a transform. This technique improves coding efficiency especially for computer-generated images. However, since the global and local properties of general video signals are not known in advance, the encoder must decide for each TU (Transform Unit) whether to perform the transform or skip it. Computing rate-distortion costs for this decision is one source of increased encoder complexity. In this paper, a fast transform skip mode decision method is proposed that decides early whether the rate-distortion cost of the transform skip mode needs to be calculated, by considering the frequency characteristics of the residual signal. The proposed method reduces 4×4 TU encoding time by about 27.1% at a cost of only about 0.03% in BDBR.
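As a rough illustration of using residual frequency characteristics, the sketch below measures how strongly a 2D DCT compacts a 4×4 residual's energy and flags only poorly compacted blocks for a transform-skip RD check; the compaction measure and threshold are assumptions, not the paper's actual criterion.

```python
import numpy as np
from scipy.fft import dctn

def likely_needs_ts_check(residual_4x4: np.ndarray, threshold: float = 0.7) -> bool:
    """True if the residual's DCT energy is poorly compacted, so the
    transform-skip mode is worth a rate-distortion evaluation."""
    coeffs = dctn(residual_4x4.astype(float), norm="ortho")
    energy = coeffs ** 2
    # Fraction of energy captured by the 2x2 low-frequency corner.
    compaction = energy[:2, :2].sum() / max(energy.sum(), 1e-12)
    return compaction < threshold

smooth = np.outer(np.arange(4), np.ones(4))        # natural-image-like ramp
sharp = np.array([[0, 8, 0, 8]] * 4, dtype=float)  # screen-content-like edges
print(likely_needs_ts_check(smooth), likely_needs_ts_check(sharp))  # False True
```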

A Non-consecutive Cloth Draping Simulation Algorithm using Conjugate Harmonic Functions (켤레조화함수를 이용한 비순차적 의류 주름 모사 알고리즘)

  • Kang Moon Koo
    • Journal of KIISE:Software and Applications / v.32 no.3 / pp.181-191 / 2005
  • This article describes a simplified mathematical model and the associated numerical algorithm for simulating cloth draped on a virtual human body. The proposed algorithm uses an elliptical, or non-consecutive, method to simulate cloth wrinkles on moving bodies without resorting to the results of past time steps of the drape simulation. A global-local analysis technique decomposes the drape into a global deformation and local wrinkles that are superposed linearly. The global deformation is determined directly by the rotation and translation of body parts, generating a wrinkle-free yet globally deformed cloth shape. The local wrinkles are calculated by solving simple elliptical equations based on the orthogonality between conjugate harmonic functions representing the wrinkle amplitude and the wrinkle direction. The proposed method requires no interpolative time frames, even for discontinuous body postures. By departing from the incremental time-integration approach of conventional methods, it yields a remarkable reduction in CPU time and enhanced stability. Transient cloth motion can also be achieved by interpolating between the deformations corresponding to each static posture.
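For reference, the orthogonality the abstract relies on follows from the Cauchy-Riemann equations relating a harmonic wrinkle-amplitude field u to its conjugate v; these are standard complex-analysis facts, stated here in generic notation rather than the paper's:

```latex
% u and v conjugate harmonic: Cauchy-Riemann equations force their
% gradients (hence their level curves) to be mutually orthogonal.
\begin{align}
  \frac{\partial u}{\partial x} &= \frac{\partial v}{\partial y}, &
  \frac{\partial u}{\partial y} &= -\frac{\partial v}{\partial x},\\
  \nabla u \cdot \nabla v &= u_x v_x + u_y v_y = u_x(-u_y) + u_y u_x = 0, &
  \Delta u &= \Delta v = 0.
\end{align}
```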

The Impact of the PCA Dimensionality Reduction for CNN based Hyperspectral Image Classification (CNN 기반 초분광 영상 분류를 위한 PCA 차원축소의 영향 분석)

  • Kwak, Taehong;Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.959-971 / 2019
  • CNN (Convolutional Neural Network) is a representative deep learning algorithm that can extract high-level spatial and spectral features, and it has been applied to hyperspectral image classification. However, one significant drawback of applying CNNs to hyperspectral images is the high dimensionality of the data, which increases training time and processing complexity. To address this problem, several CNN-based hyperspectral image classification studies have exploited PCA (Principal Component Analysis) for dimensionality reduction. One limitation is that the spectral information of the original image can be lost through PCA. Although it is clear that the use of PCA affects both accuracy and CNN training time, its impact on CNN-based hyperspectral image classification has been understudied. The purpose of this study is to analyze the quantitative effect of PCA in CNNs for hyperspectral image classification. The hyperspectral images were first transformed through PCA and fed into the CNN model while varying the size of the reduced dimensionality. In addition, 2D-CNN and 3D-CNN frameworks were applied to analyze the sensitivity of PCA with respect to the convolution kernel in the model. Experimental results were evaluated in terms of classification accuracy, learning time, variance ratio, and training process. The reduced dimensionality was most efficient when the explained variance ratio reached 99.7%~99.8%. With the 3D kernel, unlike the 2D kernel, the original-image CNN achieved higher classification accuracy than the PCA-reduced CNN, indicating that dimensionality reduction is relatively less effective for 3D kernels.
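A minimal sketch of the preprocessing step, assuming a synthetic cube and scikit-learn's PCA; the 99.7% variance target comes from the abstract's reported optimum, while the cube dimensions are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

h, w, bands = 64, 64, 200                # hypothetical hyperspectral cube
cube = np.random.rand(h, w, bands)       # synthetic data, one spectrum per pixel

pixels = cube.reshape(-1, bands)         # (h*w, bands)
pca = PCA(n_components=0.997)            # keep components explaining 99.7% variance
reduced = pca.fit_transform(pixels)

reduced_cube = reduced.reshape(h, w, -1)  # back to (h, w, k) for 2D/3D-CNN input
print(bands, "->", reduced_cube.shape[-1],
      "explained:", pca.explained_variance_ratio_.sum())
```

Passing a float in (0, 1) as `n_components` is scikit-learn's built-in way to select the smallest number of components whose cumulative explained variance reaches that ratio.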

Development of GPS Multipath Error Reduction Method Based on Image Processing in Urban Area (디지털 영상을 활용한 도심지 내 GPS 다중경로오차 경감 방법 개발)

  • Yoon, Sung Joo;Kim, Tae Jung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.2 / pp.105-112 / 2018
  • GPS (Global Positioning System) determines a receiver's position from satellite positions and signal-based pseudoranges. In urban areas these signals are reflected by surrounding structures, causing multipath errors. This paper proposes a method for reducing multipath error using digital images to enhance positioning accuracy. The goal of the study is to estimate the shielding environment of the receiver through image processing and apply it to GPS positioning. The proposed method first performs preprocessing to reduce the effect of noise on the images. Next, it uses the Hough transform to detect the outlines of building roofs and determines mask angles and the permissible azimuth range. It then classifies the satellites according to these conditions using the image processing results. Finally, based on point positioning, it computes the receiver position by applying a weight model that assigns different weights to the classified satellites. We confirmed that the RMSE (Root Mean Square Error) was reduced by 2.29 m in the horizontal direction and by 15.62 m in the vertical direction. This paper shows the potential of combining GPS positioning with image processing technology.
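A sketch of the classification-and-weighting step, assuming hypothetical image-derived mask angles per 90° azimuth sector and a common sin²(elevation) weighting; the paper's actual weight model and sector layout may differ.

```python
import math

# Mask angles (deg) per azimuth sector, as derived from detected rooflines.
mask_angle_by_azimuth = {0: 35.0, 90: 20.0, 180: 15.0, 270: 40.0}

def mask_angle(azimuth_deg: float) -> float:
    """Mask angle of the 90-degree sector containing the azimuth."""
    sector = int(azimuth_deg % 360) // 90 * 90
    return mask_angle_by_azimuth[sector]

def satellite_weight(azimuth_deg: float, elevation_deg: float) -> float:
    if elevation_deg < mask_angle(azimuth_deg):
        return 0.1  # below the roofline: likely reflected, strongly down-weighted
    return math.sin(math.radians(elevation_deg)) ** 2  # common elevation weighting

for prn, az, el in [("G05", 45.0, 28.0), ("G12", 120.0, 55.0)]:
    print(prn, round(satellite_weight(az, el), 3))
```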

3D Modeling Approaches in Estimation of Resource and Production of Musan Iron Mine, North Korea (3차원 모델링을 활용한 북한 무산광산일대의 자원량 및 생산량 추정)

  • Bae, Sungji;Yu, Jaehyung;Koh, Sang-Mo;Heo, Chul-Ho
    • Economic and Environmental Geology / v.48 no.5 / pp.391-400 / 2015
  • South Korea is a global steel producer and a major consumer, yet its iron ore production is very low compared to demand. North Korea, on the other hand, holds a tremendous amount of iron reserves, but its production rate is limited, and data regarding North Korea's mineral resources are very limited and uncertain because of its political isolation. This study estimated the iron ore resources and production of the Musan iron mine, the world-known open-pit mine of North Korea, using satellite imagery (Landsat MSS, ASTER) and digital maps from 1976 to 2007. As a result, the mining area of the Musan mine increased by 6.1 km² during the 30 years, and the active mining sector was estimated at 4.9 km². Based on 3D modeling and the average iron ore density of the Anshan formation in China, we estimated the iron resources at 0.7 billion metric tons and the cumulative production at 0.2 billion metric tons. This corresponds to an average annual production of 8.1 million tons, which coincides well with previous reports. We expect this study to provide trustworthy preliminary data for inter-Korean exchange programs.
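The tonnage arithmetic behind such estimates is excavated volume times ore density; in the sketch below, the mean excavation depth and density are illustrative placeholders (chosen so the output lands near the reported 0.2 billion tons), not values taken from the paper.

```python
mined_area_km2 = 4.9        # mining sector area reported in the study
avg_depth_m = 12.0          # hypothetical mean excavation depth from DEM differencing
density_t_per_m3 = 3.5      # hypothetical iron ore density (t/m^3)

volume_m3 = mined_area_km2 * 1e6 * avg_depth_m   # km^2 -> m^2, times depth
production_t = volume_m3 * density_t_per_m3
print(f"~{production_t / 1e9:.2f} billion metric tons")  # ~0.21 with these inputs
```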

Stereo Image-based 3D Modelling Algorithm through Efficient Extraction of Depth Feature (효율적인 깊이 특징 추출을 이용한 스테레오 영상 기반의 3차원 모델링 기법)

  • Ha, Young-Su;Lee, Heng-Suk;Han, Kyu-Phil
    • Journal of KIISE:Computer Systems and Theory / v.32 no.10 / pp.520-529 / 2005
  • A feature-based 3D modeling algorithm is presented in this paper. Conventional methods use depth-based techniques, so they require much time for image matching to extract depth information. Although feature-based methods have a lower computational load than depth-based ones, they still must calculate the modeling error over all pixels within each triangle, which also increases computation time. The proposed algorithm therefore consists of three phases: initial 3D model generation, model evaluation, and model refinement. Intensity gradients and incremental Delaunay triangulation are used in the initial model generation. In this phase, a morphological edge operator is adopted for fast edge filtering, and the incremental Delaunay triangulation is modified to reduce computation time by avoiding error calculations over all pixels and by selecting a vertex near the centroid of the previous triangle. After model generation, sparse vertices are matched, and each face is evaluated by its size, approximation error, and disparity fluctuation in the evaluation stage. Faces with large errors are then selectively refined into smaller faces. Experimental results show that the proposed algorithm acquires an adaptive model with fewer modeling errors in both smooth and abrupt areas and remarkably reduces model acquisition time.
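A minimal sketch of the evaluate-and-refine loop, using SciPy's Delaunay triangulation; the stand-in disparity function, per-face error measure, and threshold are assumptions for illustration, not the paper's exact criteria.

```python
import numpy as np
from scipy.spatial import Delaunay

def disparity(p):                        # stand-in for a matched disparity map
    return np.sin(p[..., 0] / 20.0) * 5 + p[..., 1] / 30.0

points = np.random.rand(30, 2) * 100     # initial feature points (e.g. from edges)
for _ in range(3):                       # a few refinement passes
    tri = Delaunay(points)
    errors = []
    for simplex in tri.simplices:
        corners = points[simplex]
        centroid = corners.mean(axis=0)
        # Planar interpolation at the centroid = mean of corner disparities.
        plane_est = disparity(corners).mean()
        errors.append(abs(disparity(centroid) - plane_est))
    worst = int(np.argmax(errors))
    if errors[worst] < 0.05:             # all faces approximate well enough
        break
    # Refine the worst face by inserting a vertex at (near) its centroid.
    points = np.vstack([points, points[tri.simplices[worst]].mean(axis=0)])
print(len(points), "vertices after refinement")
```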