• Title/Summary/Keyword: Mapping Technique

857 search results

GPU-only Terrain Rendering for Walk-through (Walk-through를 지원하는 GPU 기반 지형렌더링)

  • Park, Sun-Yong;Oh, Kyoung-Su;Cho, Sung-Hyun
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.71-80
    • /
    • 2007
  • In this paper, we introduce an efficient GPU-based real-time terrain rendering technique applicable to every kind of game. Our method can represent terrain with only a height map, without any extra geometry. It allows free movement in the air or on the surface, so it can be applied directly to any computer game as well as to virtual reality. Since our method is not based on a geometric structure, it needs no special LOD policy, and the precision of the geometric representation and the visual quality depend solely on the resolution of the height map and color map. Moreover, the GPU-only technique frees the CPU for more general work and, as a result, enhances the overall performance of the system. To date, there has been much research on terrain representation, but most of it relies on the CPU or confines its applications to flight simulation. By improving existing displacement mapping techniques and applying them to terrain rendering, we completely rule out problems such as cracking and popping that occur in polygon-based techniques. The most important contributions are efficient handling of an arbitrary LOS (line of sight) and dramatically improved visual quality during walk-through, achieved by reconstructing the height field with curved patches. We also suggest a simple and useful method for calculating ray-patch intersections. We implemented all of this 100% on the GPU and obtained frame rates of tens to hundreds with height maps of various resolutions (256×256 to 4096×4096).

  • PDF
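The core operation the abstract describes, finding where a view ray first hits the height field, can be sketched as a simple ray-marching loop with linear refinement between the last two samples. This is a CPU illustration of the idea only (the paper does this per pixel on the GPU with curved-patch reconstruction, which is not shown here); all names are illustrative.

```python
def intersect_heightfield(height, origin, direction, t_max=10.0, steps=400):
    """March along a ray until it dips below the height field.

    height   : function (x, z) -> terrain height (stands in for the height map)
    origin   : (x, y, z) ray start
    direction: (dx, dy, dz) ray direction
    Returns the hit point or None.  Linear interpolation between the last
    two samples refines the hit, loosely mirroring a ray-patch intersection step.
    """
    dt = t_max / steps
    prev_t = 0.0
    prev_dy = origin[1] - height(origin[0], origin[2])
    for i in range(1, steps + 1):
        t = i * dt
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        dy = y - height(x, z)              # signed height above the terrain
        if dy < 0.0:                       # crossed the surface in (prev_t, t]
            s = prev_dy / (prev_dy - dy)   # linear root estimate
            th = prev_t + s * (t - prev_t)
            return tuple(origin[k] + th * direction[k] for k in range(3))
        prev_t, prev_dy = t, dy
    return None

# Flat terrain at y = 0; a ray descending from y = 1 hits near x = 1.
hit = intersect_heightfield(lambda x, z: 0.0, (0.0, 1.0, 0.0), (1.0, -1.0, 0.0))
```

On the GPU this loop runs in a fragment shader with the height map sampled as a texture; the step count trades accuracy against cost, which is why the paper's curved-patch refinement matters for close-up walk-through views.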

Application and perspectives of proteomics in crop science fields (작물학 분야 프로테오믹스의 응용과 전망)

  • Woo Sun-Hee
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2004.04a
    • /
    • pp.12-27
    • /
    • 2004
  • Thanks to spectacular advances in techniques for identifying proteins separated by two-dimensional electrophoresis and in methods for large-scale analysis of proteome variations, proteomics is becoming an essential methodology in various fields of plant science. Plant proteomics is most useful when combined with other functional-genomics tools and approaches. A combination of microarray and proteomics analysis can indicate whether gene regulation is controlled at the level of transcription or of translation and protein accumulation. In this review, we describe the catalogues of the rice proteome constructed in our program and discuss the functional characterization of some of these proteins. Mass spectrometry is the most prevalent technique for rapidly identifying a large number of proteins in proteome analysis; however, the conventional Western blotting/sequencing technique is still used in many laboratories. As a first step toward efficiently constructing protein data files for the proteome analysis of major cereals, we analyzed the N-terminal sequences of 100 rice embryo proteins and 70 wheat spike proteins separated by two-dimensional electrophoresis. Edman degradation revealed the N-terminal peptide sequences of only 31 rice proteins and 47 wheat proteins, suggesting that the remaining protein spots are N-terminally blocked. To efficiently determine the internal sequences of blocked proteins, we developed a modified Cleveland peptide mapping method, with which the internal sequences of all 69 blocked rice proteins were determined. Among these 100 rice proteins, thirty had homologous sequences identifiable in the rice genome database; the rest lacked homologous proteins. This appears consistent with the fact that only about 30% of total rice cDNA has been deposited in the database. The major proteins involved in the growth and development of rice can also be identified using the proteome approach. Some of these proteins, including a calcium-binding protein that turned out to be calreticulin, a gibberellin-binding protein that is ribulose-1,5-bisphosphate carboxylase/oxygenase activase in rice, and a leginsulin-binding protein in soybean, have functions in the signal transduction pathway. Proteomics is well suited not only to determining interactions between pairs of proteins but also to identifying multisubunit complexes. Currently, a protein-protein interaction database for plant proteins (http://genome.c.kanazawa-u.ac.jp/Y2H) could be a very useful tool for the plant research community. Recently, we separated proteins involved in grain filling and seed maturation in rice for analysis by ESI-Q-TOF/MS and MALDI-TOF/MS. This experiment shows that a number of 2-DE-separated rice proteins can be identified easily and rapidly by ESI-Q-TOF/MS and MALDI-TOF/MS. The information thus obtained from the plant proteome would be helpful in predicting the functions of unknown proteins and useful in plant molecular breeding, and could provide a venue for plant breeders and molecular biologists to design their research strategies precisely.

  • PDF

Application of Multispectral Remotely Sensed Imagery for the Characterization of Complex Coastal Wetland Ecosystems of southern India: A Special Emphasis on Comparing Soft and Hard Classification Methods

  • Shanmugam, Palanisamy;Ahn, Yu-Hwan;Sanjeevi, Shanmugam
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.189-211
    • /
    • 2005
  • This paper compares the recently developed soft classification method based on Linear Spectral Mixture Modeling (LSMM) with traditional hard classification methods based on the Iterative Self-Organizing Data Analysis (ISODATA) and Maximum Likelihood Classification (MLC) algorithms, in order to achieve appropriate results for mapping, monitoring and preserving valuable coastal wetland ecosystems of southern India using Indian Remote Sensing Satellite (IRS) 1C/1D LISS-III and Landsat-5 Thematic Mapper image data. ISODATA and MLC were applied to these satellite images to produce maps of 5, 10, 15 and 20 wetland classes for each of three contrasting coastal wetland sites: Pitchavaram, Vedaranniyam and Rameswaram. The accuracy of the derived classes was assessed with the simplest descriptive statistic, overall accuracy, and with a discrete multivariate technique, KAPPA accuracy. ISODATA classification produced maps of poorer accuracy than MLC classification. However, overall accuracy and KAPPA accuracy decreased systematically as more classes were derived from the IRS-1C/1D and Landsat-5 TM imagery by ISODATA and MLC. Two principal factors explain the decreased classification accuracy: spectral overlap/confusion and the inadequate spatial resolution of the sensors. Of these, the limited instantaneous field of view (IFOV) of the sensors caused a number of mixture pixels (mixels) to occur in the imagery, and their effect on the classification process was a major obstacle to deriving accurate wetland cover types, in spite of the increasing spatial resolution of new-generation Earth Observation Sensors (EOS). To improve the classification accuracy, a soft classification method based on LSMM was applied to compute the spectral mixture and classify the IRS-1C/1D LISS-III and Landsat-5 TM imagery. The method considers the number of reflectance end-members that form the scene spectra, determines their nature, and finally decomposes the spectra into their end-members. To evaluate the LSMM areal estimates, the resulting end-member fractions were compared with the normalized difference vegetation index (NDVI), with ground truth data, and with the estimates derived from the traditional hard classifier (MLC). The findings revealed that NDVI values and vegetation fractions were positively correlated ($r^2$ = 0.96, 0.95 and 0.92 for Rameswaram, Vedaranniyam and Pitchavaram, respectively) and that NDVI and soil fraction values were negatively correlated ($r^2$ = 0.53, 0.39 and 0.13), indicating the reliability of the sub-pixel classification. Compared with ground truth data, the precision of LSMM was 92% for the moisture fraction and 96% for the soil fraction. The LSMM thus seems well suited to locating small wetland habitats that occur as sub-pixel inclusions and to representing continuous gradations between different habitat types.
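The LSMM step the abstract describes, decomposing each pixel's spectrum into end-member fractions, reduces to a small least-squares problem per pixel. A minimal numpy sketch with made-up two-band, two-end-member reflectances (the clipping/renormalization is a common simple stand-in for fully constrained unmixing, not the authors' exact formulation):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate end-member fractions for one pixel spectrum.

    pixel      : (bands,) observed reflectance
    endmembers : (bands, m) matrix whose columns are end-member spectra
    Solves pixel ≈ endmembers @ f by least squares, then clips to [0, 1]
    and renormalizes so the fractions sum to 1.
    """
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, 1.0)
    return f / f.sum()

# Two bands, two end-members: "vegetation" and "soil" (illustrative spectra).
E = np.array([[0.05, 0.30],    # band 1 reflectance of each end-member
              [0.50, 0.25]])   # band 2
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # a noise-free 70/30 mixture pixel
fractions = unmix(mixed, E)
```

Applied to every pixel of a LISS-III or TM scene, the vegetation-fraction image is what gets correlated against NDVI in the accuracy assessment above.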

Study on the Retrieval of Vertical Air Motion from the Surface-Based and Airborne Cloud Radar (구름레이더를 이용한 대기 공기의 연직속도 추정연구)

  • Jung, Eunsil
    • Atmosphere
    • /
    • v.29 no.1
    • /
    • pp.105-112
    • /
    • 2019
  • Measurements of vertical air motion and microphysics are essential for improving our understanding of convective clouds. In this paper, the author reviews current research on the retrieval of vertical air motion using cloud radar. At a radar wavelength of 3 mm (W-band radar; 94-GHz radar; cloud radar), the raindrop backscattering cross-section ($\sigma_b$) varies between successive maxima and minima as a function of the raindrop diameter (D), as well described by Mie theory. The first Mie minimum in the backscattering cross-section occurs at D ~ 1.68 mm, which translates to a raindrop terminal fall velocity of ${\sim}5.85\;m\;s^{-1}$ based on the Gunn and Kinzer relationship. Since raindrop diameters often exceed this size, the signature is captured in the radar Doppler spectrum, and thus the location of the first Mie minimum can be used as a reference for retrieving the vertical air motion. The Mie technique is applied to radar Doppler spectra from surface-based and airborne, upward-pointing W-band radars. The contributions of aircraft motion to the vertical air motion are also described, and a first-order aircraft-motion-corrected equation is presented. The review further shows that separate spectral peaks due to cloud droplets can provide independent validation of the Mie-technique-retrieved vertical air motion, using the cloud droplets as tracers of vertical air motion.
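Once the Mie notch is located in the Doppler spectrum, the retrieval is simple arithmetic: drops of D ≈ 1.68 mm always fall at ≈ 5.85 m s⁻¹ relative to the air, so the velocity at which the notch appears, minus 5.85 m s⁻¹, is the vertical air motion. A sketch on a synthetic spectrum (bin values and the notch-finding rule are illustrative, not from the paper):

```python
def retrieve_air_motion(velocities, powers, fall_speed=5.85):
    """Estimate vertical air motion from a W-band Doppler rain spectrum.

    velocities : Doppler velocity bins (m/s, positive downward here)
    powers     : spectral power in each bin
    The first Mie minimum (taken here as the deepest local notch) marks
    drops falling at `fall_speed` relative to the air, so the air motion
    is w = v_notch - fall_speed (positive = downdraft in this convention).
    """
    # local minima: bins lower than both neighbours
    notches = [i for i in range(1, len(powers) - 1)
               if powers[i] < powers[i - 1] and powers[i] < powers[i + 1]]
    i_notch = min(notches, key=lambda i: powers[i])   # deepest notch
    return velocities[i_notch] - fall_speed

# Synthetic rain spectrum with its Mie notch at 6.35 m/s, i.e. a
# 0.5 m/s downdraft has shifted the whole spectrum downward.
vel = [4.0, 4.5, 5.0, 5.5, 6.0, 6.35, 6.7, 7.2, 7.7]
pwr = [1.0, 3.0, 6.0, 8.0, 7.0, 2.0, 6.5, 5.0, 2.5]
w_air = retrieve_air_motion(vel, pwr)
```

For the airborne case the abstract mentions, the aircraft's own vertical velocity would additionally be subtracted from the retrieved value.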

MR T2 Map Technique: How to Assess Changes in Cartilage of Patients with Osteoarthritis of the Knee (MR T2 Map 기법을 이용한 슬관절염 환자의 연골 변화 평가)

  • Cho, Jae-Hwan;Park, Cheol-Soo;Lee, Sun-Yeob;Kim, Bo-Hui
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.298-307
    • /
    • 2009
  • Using the MR T2 map technique, this study aims, first, to measure the difference in cartilage T2 values between healthy people and patients with osteoarthritis and, second, to assess the shape of and damage to knee-joint cartilage, thereby considering the utility of the T2 map technique. Thirty healthy people, selected on the basis of clinical history and current status, and thirty patients with osteoarthritis of the knee, screened by plain X-ray from November 2007 to December 2008, were enrolled. T2 spin echo (SE) images of the knee-joint cartilage were acquired with a multi-echo T2 SE sequence (TR: 1,000 ms; TE: 6.5, 13, 19.5, 26, 32.5, 40, 45.5 and 52 ms). From these images, the change in signal intensity (SI) for each section of the knee-joint cartilage was measured, and average T2 values were computed with Origin 7.0 Professional (Northampton, MA 01060, USA). With these T2 values, an independent-samples t-test was performed in SPSS for Windows version 12.0 for quantitative analysis and to test the statistical significance of differences between the healthy and patient groups. Examining the T2 values for the anterior and lateral articular cartilage in the sagittal and coronal planes: in the sagittal plane, the average T2 of the femoral cartilage in the patient group ($42.22{\pm}2.91$) was higher than that of the healthy group ($36.26{\pm}5.01$), and the average T2 of the tibial cartilage in the patient group ($43.83{\pm}1.43$) was higher than that in the healthy group ($36.45{\pm}3.15$). In the coronal plane, the average T2 of the medial femoral cartilage in the patient group ($45.65{\pm}7.10$) was higher than in the healthy group ($36.49{\pm}8.41$), as was the average T2 of the anterior tibial cartilage ($44.46{\pm}3.44$ for the patient group vs. $37.61{\pm}1.97$ for the healthy group). For the lateral femoral cartilage in the coronal plane, the patient group showed a higher T2 ($43.41{\pm}4.99$) than the healthy group ($37.64{\pm}4.02$), and the tendency was similar in the lateral tibial cartilage ($43.78{\pm}8.08$ for the patient group vs. $36.62{\pm}7.81$ for the healthy group). Along with the morphological MR imaging techniques already in use, the T2 map technique appears to aid the early diagnosis of patients with cartilage problems, in particular osteoarthritis of the knee, by quantitatively analyzing structural and functional changes of the cartilage.

  • PDF
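Each T2 value in the study comes from fitting the multi-echo signal decay S(TE) = S0·exp(−TE/T2) across the echo train. A log-linear least-squares sketch (pure Python; echo times at 6.5 ms spacing, illustrative of the protocol rather than the study's exact fitting software):

```python
import math

def fit_t2(te_ms, signals):
    """Fit S(TE) = S0 * exp(-TE / T2) by least squares on log(S).

    te_ms   : echo times in ms
    signals : measured signal intensities (must be positive)
    Since ln S = ln S0 - TE / T2 is a straight line in TE, an ordinary
    linear fit of ln S against TE recovers both parameters.
    Returns (S0, T2_ms).
    """
    n = len(te_ms)
    ys = [math.log(s) for s in signals]
    mx = sum(te_ms) / n
    my = sum(ys) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(te_ms, ys))
             / sum((t - mx) ** 2 for t in te_ms))
    s0 = math.exp(my - slope * mx)
    return s0, -1.0 / slope

# Synthetic decay with T2 = 42 ms, near the patients' femoral cartilage value.
tes = [6.5, 13.0, 19.5, 26.0, 32.5, 39.0, 45.5, 52.0]
sig = [1000.0 * math.exp(-te / 42.0) for te in tes]
s0, t2 = fit_t2(tes, sig)
```

Repeating this fit pixel by pixel over the ROI produces the T2 map whose regional averages are compared between groups above.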

Isolation and characterization of sigH from Corynebacterium glutamicum (Corynebacterium glutamicum의 sigH 유전자의 분리 및 기능분석)

  • Kim Tae-Hyun;Kim Hyung-Joon;Park Joon-Sung;Kim Younhee;Lee Heung-Shick
    • Korean Journal of Microbiology
    • /
    • v.41 no.2
    • /
    • pp.99-104
    • /
    • 2005
  • Corynebacterial clones that exert regulatory effects on the expression of the glyoxylate bypass genes were isolated using a reporter plasmid carrying the enteric lacZ gene fused to the aceB promoter of Corynebacterium glutamicum. Some clones carried common fragments, as revealed by DNA mapping. Subcloning analysis, followed by measurement of $\beta-galactosidase$ activity in Escherichia coli, identified the region responsible for the aceB-repressing activity. Sequence analysis of the DNA fragment identified two independent ORFs, ORF1 and ORF2; of these, ORF2 turned out to be responsible for the aceB-repressing activity. ORF1 encoded a 23,216 Da protein composed of 206 amino acids. A sequence-similarity search indicated that this ORF may encode an ECF-type $\sigma$ factor, and it was designated sigH. To identify the function of sigH, a C. glutamicum sigH mutant was constructed by gene disruption; the mutant showed growth retardation compared with the wild-type strain. In addition, the mutant strain was sensitive to the oxidative-stress-generating agent plumbagin. These results imply that sigH is probably involved in the stress response occurring during normal cell growth.

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed in a mode. In this paper, we address the HW/SW partitioning problem for multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimum total system cost for the allocation/mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem is how fully the potential parallelism among module executions is exploited. However, owing to the inherently large search space of the parallelism, and to keep schedulability analysis tractable, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise-refinement partitioning technique for single-mode multi-task applications, and then extend it to the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single-mode and multi-mode multi-task applications, respectively, compared with the conventional method.
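The flavor of the cost-versus-deadline trade-off can be illustrated with a toy greedy heuristic (this is not the authors' algorithm, and the module data are invented): start all-software at zero hardware cost, and move modules to hardware only while the task's timing constraint is violated.

```python
def partition(modules, deadline):
    """Toy HW/SW partitioner for one task of sequentially executed modules.

    modules  : list of (name, sw_time, hw_time, hw_cost) tuples
    deadline : timing constraint on the task's total execution time
    Greedy: start all-software (cost 0); while the deadline is violated,
    move to hardware the module with the best time-saved-per-cost ratio.
    Returns (mapping dict, total hardware cost), or None if infeasible.
    """
    mapping = {name: "SW" for name, *_ in modules}
    total_time = sum(sw for _, sw, _, _ in modules)
    cost = 0
    while total_time > deadline:
        candidates = [(name, sw, hw, c) for name, sw, hw, c in modules
                      if mapping[name] == "SW" and sw > hw]
        if not candidates:
            return None                    # no remaining move can help
        name, sw, hw, c = max(candidates, key=lambda m: (m[1] - m[2]) / m[3])
        mapping[name] = "HW"
        total_time -= sw - hw
        cost += c
    return mapping, cost

mods = [("fft", 40, 10, 5), ("filter", 30, 8, 8), ("io", 10, 9, 20)]
result = partition(mods, deadline=60)
```

The paper's techniques go well beyond this: they consider parallel execution among modules, multiple tasks, and mode changes, which is precisely what makes the real search space so large.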

Impacts assessment of Climate changes in North Korea based on RCP climate change scenarios II. Impacts assessment of hydrologic cycle changes in Yalu River (RCP 기후변화시나리오를 이용한 미래 북한지역의 수문순환 변화 영향 평가 II. 압록강유역의 미래 수문순환 변화 영향 평가)

  • Jeung, Se Jin;Kang, Dong Ho;Kim, Byung Sik
    • Journal of Wetlands Research
    • /
    • v.21 no.spc
    • /
    • pp.39-50
    • /
    • 2019
  • This study assesses the influence of climate change on the hydrological cycle at the basin level in North Korea. The model selected for this study is MRI-CGCM3, one of the models used in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The study adopted Spatial Disaggregation-Quantile Delta Mapping (SDQDM), a statistical downscaling technique, for the bias correction of the climate change scenarios, and a comparison of the results before and after applying SDQDM supported a review of the technique's validity. In addition, to determine the influence of climate change on the hydrological cycle, the study examined runoff in North Korea. Predicting this influence requires optimized parameters for the runoff model used in the analysis; however, North Korea is effectively an ungauged region owing to its political situation, and it was difficult to collect the country's runoff observations. Hence, the study selected 16 basins with high-quality runoff data and calculated optimized parameters for the M-RAT model. The study also analyzed the correlations among basin-characteristic variables to account for multicollinearity and then, using stepwise regression analysis, developed an equation for estimating the parameters of ungauged basins. To verify the equation, the study treated the Osipcheon River, Namdaecheon Stream, Yongdang Reservoir and Yonggang Stream basins as ungauged and performed cross-validation; for all four basins, high efficiency was confirmed, with efficiency coefficients of 0.8 or higher. The study then used the climate change scenarios and the estimated runoff-model parameters to assess basin-level changes in hydrological cycle processes under climate change in the Amnokgang (Yalu) River of North Korea. The results showed that climate change would increase precipitation, and the accompanying rise in temperature is predicted to elevate evapotranspiration; however, storage capacity in the basin was found to decrease. The flow-duration analysis indicated a decrease in flow on the 95th day, an increase in drought flow during the Future 1 and Future 2 periods, and an increase in both flows during the Future 3 period.
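The quantile-delta-mapping step of the bias correction can be sketched with empirical quantiles: each future model value keeps its own quantile rank, and the model-projected relative change at that rank is applied to the observed quantile. This is the simple multiplicative form often used for precipitation, a simplified stand-in for the SDQDM used in the study; all data below are invented.

```python
def quantile(sorted_xs, q):
    """Linear-interpolation empirical quantile of pre-sorted data, q in [0, 1]."""
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (pos - lo) * (sorted_xs[hi] - sorted_xs[lo])

def qdm(obs, model_hist, model_fut):
    """Multiplicative quantile delta mapping (illustrative sketch)."""
    obs_s = sorted(obs)
    hist_s = sorted(model_hist)
    fut_s = sorted(model_fut)
    n = len(fut_s)
    out = []
    for x in model_fut:
        tau = fut_s.index(x) / (n - 1)            # quantile rank of x in the future run
        delta = x / quantile(hist_s, tau)         # model-projected relative change
        out.append(quantile(obs_s, tau) * delta)  # apply the change to the observed quantile
    return out

# The model is dry-biased by half; its future run doubles the historical values.
obs        = [10.0, 20.0, 30.0, 40.0]
model_hist = [5.0, 10.0, 15.0, 20.0]
model_fut  = [10.0, 20.0, 30.0, 40.0]
corrected  = qdm(obs, model_hist, model_fut)
```

The point of the "delta" form is visible in the toy numbers: the bias is removed while the model's projected doubling of the climate signal is preserved.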

PCA­based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics
    • /
    • v.14 no.4
    • /
    • pp.211-217
    • /
    • 2003
  • Principal component analysis (PCA) is a well-known data analysis method useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance, and is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared-error sense, an optimal linear mapping of the signals that are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e. variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings, and because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached with the ganglion cell side to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated golden connection lanes terminating in an 8${\times}$8 array (spacing 200 $\mu$m, electrode diameter 30 $\mu$m) in the center of the plate. The MEA 60 system was used to record retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, whereby several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.

  • PDF
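The PC1/PC2 computation used for the spike sorting is standard PCA on the matrix of detected waveforms. A numpy sketch on toy "spikes" (two obviously distinct waveform shapes plus noise; the shapes and counts are invented for illustration):

```python
import numpy as np

def pca_scores(waveforms, n_components=2):
    """Project spike waveforms onto their first principal components.

    waveforms : (n_spikes, n_samples) array, one detected spike per row
    Returns an (n_spikes, n_components) array of scores (PC1, PC2, ...),
    computed from the SVD of the mean-centered data, as in standard PCA.
    """
    centered = waveforms - waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 32)
shape_a = np.sin(2 * np.pi * t)          # "cell A" spike shape
shape_b = np.sin(4 * np.pi * t)          # "cell B" spike shape
spikes = np.vstack(
    [shape_a + 0.05 * rng.standard_normal(32) for _ in range(20)]
    + [shape_b + 0.05 * rng.standard_normal(32) for _ in range(20)])
scores = pca_scores(spikes)

# The two cells separate along PC1: the class means sit far apart.
gap = abs(scores[:20, 0].mean() - scores[20:, 0].mean())
```

Plotting PC1 against PC2 for each spike gives exactly the kind of two-dimensional cluster plot the abstract describes; a clustering step then assigns spikes to cells.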

A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • Prefetching is an effective way to reduce the latency caused by memory access. However, excessively aggressive prefetching not only leads to cache pollution, cancelling out the benefits of prefetching, but also increases bus traffic, degrading overall performance. In this paper, a prefetch filtering scheme is proposed that dynamically decides whether to start a prefetch by consulting a filtering table, so as to reduce the cache pollution due to unnecessary prefetches. First, a prefetch hashing table 1-bit state filtering scheme (PHT1bSC) is analyzed to expose the lock problem of the conventional scheme: like the conventional scheme it uses N:1 mapping, but each entry holds a 1-bit value with two states. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT), the main idea of this paper, is then proposed and exhibits the most exact filtering performance. This scheme has a table of the same length as PHT1bSC, each entry has the same fields as in the CBAT scheme, and recently never-referenced data block addresses are 1:1-mapped to entries of the filter table. Simulations were performed with commonly used prefetch schemes on general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme showed an improvement of up to 22%, and the cache miss ratio decreased by 7.9% by virtue of the improved filtering accuracy compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.
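The general idea, consulting a table of data block addresses before issuing a prefetch, can be sketched as a direct-mapped filter table. This is a simplified illustration only; the entry layout and update policy below are assumptions, not the paper's exact PBALT design.

```python
class PrefetchFilter:
    """Direct-mapped prefetch filter table (toy sketch of the idea).

    Each entry 1:1-maps one data block address (via an index/tag split)
    plus a 'useless' bit.  A prefetch is suppressed only when the table
    remembers that prefetching this exact block was useless last time.
    """
    def __init__(self, entries=256):
        self.entries = entries
        self.table = [None] * entries        # each slot: (tag, useless_bit)

    def _slot(self, block_addr):
        return block_addr % self.entries, block_addr // self.entries

    def should_prefetch(self, block_addr):
        idx, tag = self._slot(block_addr)
        entry = self.table[idx]
        return not (entry is not None and entry == (tag, True))

    def record(self, block_addr, was_useless):
        """Update the table once it is known whether the prefetch helped."""
        idx, tag = self._slot(block_addr)
        self.table[idx] = (tag, was_useless)

f = PrefetchFilter(entries=4)
first = f.should_prefetch(0x10)      # unknown block: allow the prefetch
f.record(0x10, was_useless=True)     # it polluted the cache
second = f.should_prefetch(0x10)     # now filtered out
f.record(0x10, was_useless=False)    # later it proved useful again
third = f.should_prefetch(0x10)
```

Storing the full tag alongside the state bit is what distinguishes an exact (1:1) filter from the hashed N:1 schemes, which can confuse two blocks that share a table entry.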