• Title/Summary/Keyword: Map Size


Interference Mitigating Power Allocation Scheme for DL-MAP Information in IEEE802.16e-based Multi-cell OFDMA Systems (IEEE802.16e 기반 다중셀 OFDMA시스템에서의 하향링크 MAP정보에 대한 간섭최소화 전력할당기법)

  • Seo, Jeong-Yeon;Kang, Ji-Won;Lee, Chung-Yong
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.275-276
    • /
    • 2008
  • The IEEE 802.16e-based OFDMA system known as WiBro is in commercial service. In the WiBro system, the base station sends downlink (DL)-MAP information to all mobile stations in each cell. The DL-MAP information is repeated six times, modulated by QPSK, and coded by Convolutional Turbo Coding (CTC) with a 1/2 code rate [1],[2]. As the number of mobile stations increases, the DL-MAP size also increases. In this paper, we investigate power-allocation and interference-cancellation methods to reduce the overhead of the DL-MAP.

  • PDF
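The DL-MAP structure described in the abstract (six repetitions, QPSK, rate-1/2 CTC) makes its overhead easy to estimate. Below is a rough sketch; the fixed header size and per-user information-element size are illustrative assumptions, not figures from the paper or the standard.

```python
# Rough DL-MAP overhead estimate. The repetition factor (6), QPSK
# (2 bits/symbol), and code rate (1/2) are from the abstract; the
# fixed header and per-user IE sizes are illustrative assumptions.
def dl_map_symbols(num_users, ie_bits=60, fixed_bits=88):
    info_bits = fixed_bits + num_users * ie_bits  # one IE per mobile station
    coded_bits = info_bits * 2                    # CTC, rate 1/2
    qpsk_symbols = coded_bits // 2                # 2 bits per QPSK symbol
    return qpsk_symbols * 6                       # repeated six times

for users in (4, 16, 64):
    print(users, dl_map_symbols(users))
```

The linear growth with the number of users is the overhead problem that the paper's power-allocation scheme targets.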

Parallax Map Preprocessing Algorithm for Performance Improvement of Hole-Filling (홀 채우기의 성능 개선을 위한 시차지도의 전처리 알고리즘)

  • Kim, Jun-Ho;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.10
    • /
    • pp.62-70
    • /
    • 2013
  • DIBR (Depth Image Based Rendering) is a view-synthesis algorithm that generates images at free viewpoints from a reference color image and its depth map. One of the main challenges of DIBR is the occurrence of holes that correspond to backgrounds uncovered in the synthesized view. To fill holes efficiently, two main approaches have been actively investigated. One is to develop preprocessing algorithms for depth maps or parallax maps that reduce the size of possible holes, and the other is to develop hole-filling methods that fill the generated holes using adjacent pixels in non-hole areas. Most conventional preprocessing algorithms for reducing the size of holes are based on smoothing the depth map. Filtering the depth map, however, attenuates its resolution and generates geometric distortions. In this paper, we propose a novel preprocessing algorithm for the parallax map that improves the performance of hole-filling while avoiding the drawbacks of conventional methods.
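For context, the conventional baseline that such work improves upon can be sketched as a simple smoothing pass over the depth map: attenuating sharp depth discontinuities shrinks the holes that appear after warping, at the cost of the geometric distortion the abstract mentions. Kernel size and sigma below are illustrative assumptions.

```python
import math

# Conventional DIBR preprocessing baseline: smooth the depth map so
# that sharp depth edges -- the source of large holes after warping --
# are attenuated. Kernel size and sigma are illustrative assumptions.
def gaussian_kernel(size=9, sigma=2.0):
    c = size // 2
    k = [math.exp(-((i - c) ** 2) / (2 * sigma * sigma)) for i in range(size)]
    s = sum(k)
    return [v / s for v in k]  # normalized 1-D Gaussian weights

def smooth_row(depth_row, kernel):
    c = len(kernel) // 2
    out = []
    for i in range(len(depth_row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - c, 0), len(depth_row) - 1)  # clamp at borders
            acc += w * depth_row[idx]
        out.append(acc)
    return out
```

Applied to a row containing a sharp depth step, the filter spreads the step over several pixels, which is exactly the resolution loss the paper's parallax-map approach tries to avoid.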

ON A GENERALIZED APERIODIC PERFECT MAP

  • KIM, SANG-MOK
    • Communications of the Korean Mathematical Society
    • /
    • v.20 no.4
    • /
    • pp.685-693
    • /
    • 2005
  • An aperiodic perfect map (APM) is an array with the property that every array of a certain size, called a window, arises exactly once as a contiguous subarray. In this article, we generalize the APM to higher-dimensional arrays. First, we reframe all known definitions for generalized n-dimensional arrays. Next, some elementary known results on arrays are generalized to propositions on n-dimensional arrays. Finally, using devised integer representations, two constructions of infinite families of n-dimensional APMs are generalized from known two-dimensional constructions in [7].
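The defining window property is easy to verify for small examples. A minimal sketch for the 2-dimensional case (the n-dimensional generalization treated in the paper extends the same idea):

```python
from collections import Counter

# An array over an alphabet of size `alphabet` is an aperiodic perfect
# map for k x l windows if every possible k x l array over that
# alphabet occurs exactly once as a contiguous, non-wrapping subarray.
def is_aperiodic_perfect(arr, k, l, alphabet=2):
    rows, cols = len(arr), len(arr[0])
    windows = Counter(
        tuple(tuple(arr[r + i][c:c + l]) for i in range(k))
        for r in range(rows - k + 1)
        for c in range(cols - l + 1)
    )
    return (len(windows) == alphabet ** (k * l)
            and all(n == 1 for n in windows.values()))
```

For instance, the 1 x 5 array [[0, 0, 1, 1, 0]] is an APM for 1 x 2 binary windows: its four windows 00, 01, 11, 10 each occur exactly once.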

Leveled Spatial Indexing Technique supporting Map Generalization (지도 일반화를 지원하는 계층화된 공간 색인 기법)

  • Lee, Ki-Jung;WhangBo, Taeg-Keun;Yang, Young-Kyu
    • Journal of Korea Spatial Information System Society
    • /
    • v.6 no.2 s.12
    • /
    • pp.15-22
    • /
    • 2004
  • Map services for cellular phones face an implementation problem: the limited screen size. To represent map data effectively on a cellular-phone screen, a detailed map must be translated into less detailed data through map generalization, and zooming in and out should be handled quickly by leveling the generalized data. However, current spatial indexing methods supporting map generalization do not support all map-generalization operations. In this paper, we propose a leveled spatial indexing method, the LMG-tree, that supports map generalization, and present the results of a performance evaluation.

  • PDF

FPGA Design of Turbo Code based on MAP (MAP 기반 터보코드의 FPGA 설계)

  • Seo, Young-Ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.3C
    • /
    • pp.306-313
    • /
    • 2007
  • In this paper, we efficiently implemented a turbo-code algorithm in FPGA hardware. The turbo code used has the following characteristics: the constraint length is 3, the code rate is 1/3, and the random-interleaver size is 2048. The proposed hardware consists of a MAP block that calculates the alpha and beta values from the delta (branch metric) value, buffers for storing each value, a multiplier for calculating lambda, and a lambda buffer. The proposed algorithm and hardware architecture were verified in C++ and designed in VHDL. Finally, the design was programmed into an FPGA and tested in a wireless-communication environment for field availability. The target FPGA is a Virtex-4 XC4VFX12-12-SF363, and it operates stably at a 131.533 MHz clock frequency (7.603 ns).

Toward Occlusion-Free Depth Estimation for Video Production

  • Park, Jong-Il;Seiki-Inoue
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1997.06a
    • /
    • pp.131-136
    • /
    • 1997
  • We present a method to estimate a dense and sharp depth map using multiple cameras, for application to flexible video production. A key issue in obtaining a sharp depth map is overcoming the harmful influence of occlusion. We therefore first propose to use the depth information from multiple cameras selectively. With a simple sort-and-discard technique, we resolve the occlusion problem considerably, at a slight sacrifice of noise tolerance. However, boundary overreach from more textured areas into less textured areas at object boundaries remains to be solved. We observed that the amount of boundary overreach is less than half the size of the matching window and that, unlike in usual stereo matching, the boundary overreach with the proposed occlusion-overcoming method shows a very abrupt transition. Based on these observations, we propose a hierarchical estimation scheme that attempts to reduce boundary overreach so that edges of the depth map coincide with object boundaries on the one hand, and to reduce noisy estimates due to an insufficient matching-window size on the other. We show that the hierarchical method can produce a sharp depth map for a variety of images.

  • PDF
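The sort-and-discard step described in the abstract can be sketched per pixel as follows. The keep ratio and the averaging of the surviving estimates are illustrative assumptions, since the abstract does not give those details.

```python
# "Sort and discard" sketch for occlusion handling (illustrative):
# for each pixel, sort the per-camera matching errors and keep only
# the best fraction, on the assumption that occluded cameras yield
# large matching errors. The keep ratio is an assumed parameter.
def select_depth(per_camera_errors, per_camera_depths, keep_ratio=0.5):
    pairs = sorted(zip(per_camera_errors, per_camera_depths))
    keep = max(1, int(len(pairs) * keep_ratio))
    kept = pairs[:keep]                        # discard likely-occluded views
    return sum(d for _, d in kept) / keep      # aggregate remaining estimates
```

Discarding the worst matches is what sacrifices a little noise tolerance: fewer cameras contribute to each estimate, but occluded views no longer corrupt it.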

Atomisation and vacuum drying studies on Malaysian honey encapsulation

  • Nurul Aisyah Rosli;Boon-Beng Lee;Khairul Farihan Kasim;Che Wan Sharifah Robiah Mohamad
    • Food Science and Preservation
    • /
    • v.30 no.4
    • /
    • pp.589-601
    • /
    • 2023
  • Malaysian honey is rich in nutrients and bioactive compounds and can be a healthy alternative to refined sugar in food production. However, liquid honey's viscous and sticky nature makes it undesirable for industrial handling. In this study, an atomization system coupled with vacuum drying was used to produce honey powders to overcome this problem. Three types of Malaysian honey, namely Acacia, Gelam, and Tualang, were encapsulated in Ca-alginate gel beads using the atomization system. The density, viscosity, and surface tension of the honey-alginate solutions were measured, and the concentrations of honey and alginate influenced the physical properties of the solutions. Honey-encapsulated gel beads in the size range of 2.16-2.92 mm were produced using the atomization system with air-liquid mass flow rate ratios of 0.22-0.31, Weber numbers (We) of 112-545, and Ohnesorge numbers (Oh) of 0.35-10.46. Gel bead diameter can be predicted using a simple mathematical model. After vacuum drying, the honey gel powder produced was in the size range of 1.50-1.79 mm. The results showed that honey gel powders with good encapsulation efficiency and high honey loading could be produced using the atomization system and vacuum drying.
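The two dimensionless groups quoted above characterise the atomisation regime and follow directly from the measured solution properties. A minimal sketch (the sample values in the note below are illustrative, not measurements from the paper):

```python
import math

# We = rho * v^2 * L / sigma   (inertia vs. surface tension)
# Oh = mu / sqrt(rho * sigma * L)   (viscosity vs. inertia and surface tension)
def weber(density, velocity, length, surface_tension):
    return density * velocity ** 2 * length / surface_tension

def ohnesorge(viscosity, density, surface_tension, length):
    return viscosity / math.sqrt(density * surface_tension * length)
```

For example, with water-like properties (density 1000 kg/m³, velocity 2 m/s, length scale 1 mm, surface tension 0.05 N/m), `weber(1000, 2, 0.001, 0.05)` evaluates to 80.0; the more viscous honey-alginate solutions reach the much larger Oh values the paper reports.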

Mapping Particle Size Distributions into Predictions of Properties for Powder Metal Compacts

  • German, Randall M.
    • Proceedings of the Korean Powder Metallurgy Institute Conference
    • /
    • 2006.09b
    • /
    • pp.704-705
    • /
    • 2006
  • Discrete element analysis is used to map various log-normal particle size distributions into measures of the in-sphere pore size distribution. The combinations evaluated range from monosized spheres to bimodal mixtures and various log-normal distributions. The log-normal case proves most useful in providing a mapping of one distribution into the other (knowing the particle size distribution, we want to predict the pore size distribution). Such metrics predict where the presence of large pores is anticipated; these pores need to be avoided to ensure high sintered properties.

  • PDF

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for assessing myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq of ¹⁸F-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using ordered-subset expectation maximization (OSEM) 2D. To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the percentage of infarcted area in the total left myocardium using TTC staining. We used three threshold methods: a predefined threshold, Otsu, and a multiple Gaussian mixture model (MGMM). The predefined-threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, …, MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the area below the threshold relative to the total polar-map area. The infarct sizes measured with the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar-map defect size at predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04±3.44%, 3.87±2.09%, and 2.15±2.07%, respectively. For Otsu versus the reference infarct size, the difference was 3.56±4.16%; for MGMM, 2.29±1.94%. The predefined threshold (30%) showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold for reference infarct sizes under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
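Of the three threshold methods compared above, the Otsu step can be sketched compactly: pick the threshold that maximises the between-class variance of the polar-map intensities, then report the fraction of the map below it as the infarct size. The histogram bin count is an illustrative assumption.

```python
# Otsu thresholding on a flat list of polar-map intensity values,
# followed by the infarct-size percentage used in the study's comparison.
def otsu_threshold(values, bins=64):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    best_t, best_var = lo, -1.0
    w0 = s0 = 0
    s_all = sum(i * h for i, h in enumerate(hist))
    for i, h in enumerate(hist):
        w0 += h                       # class-0 (below-threshold) population
        if w0 == 0 or w0 == total:
            continue
        s0 += i * h
        m0 = s0 / w0                  # class-0 mean (in bin units)
        m1 = (s_all - s0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

def infarct_percent(polar_map, threshold):
    below = sum(1 for v in polar_map if v < threshold)
    return 100.0 * below / len(polar_map)
```

The MGMM method plays the same role but fits several Gaussian components to the intensity distribution first, which is what makes its threshold adaptive per subject.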

Performance Analysis of Error Correction Codes for 3GPP Standard (3GPP 규격 오류 정정 부호 기법의 성능 평가)

  • 신나나;이창우
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.15 no.1
    • /
    • pp.81-88
    • /
    • 2004
  • Turbo codes have been adopted in the 3GPP standard because their performance is very close to the Shannon limit. However, the turbo decoder requires many computations, and its memory requirement increases as the turbo-code block size grows. To reduce the complexity of the turbo decoder, the Log-MAP, Max-Log-MAP, and sliding-window algorithms have been proposed. In this paper, the performance of the turbo codes adopted in the 3GPP standard is analyzed using floating-point and fixed-point implementations, and an efficient decoding method is proposed. It is shown that the BER performance of the proposed method is close to that of the Log-MAP algorithm.
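The difference between the Log-MAP and Max-Log-MAP algorithms mentioned above comes down to one operator: Log-MAP uses the Jacobian logarithm max*(a, b) in its metric recursions, while Max-Log-MAP drops the correction term, trading a small BER loss for much lower complexity. A minimal sketch:

```python
import math

# Jacobian logarithm: max*(a, b) = ln(e^a + e^b)
#                               = max(a, b) + ln(1 + e^-|a-b|)
def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    return max(a, b)    # Max-Log-MAP approximation: correction term dropped
```

The correction term ln(1 + e^-|a-b|) is at most ln 2 (when a = b) and vanishes as |a - b| grows, which is why the Max-Log-MAP approximation costs only a fraction of a dB in practice.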