• Title/Summary/Keyword: data extract


The Modeling of the Optimal Data Format for JPEG2000 CODEC on the Fixed Compression Ratio (고정 압축률에서의 JPEG2000 코덱을 위한 최적의 데이터 형식 모델링)

  • Kang, Chang-Soo;Seo, Choon-Weon
    • Proceedings of the IEEK Conference / 2005.11a / pp.1257-1260 / 2005
  • This paper concerns optimization of the image data format, which strongly affects data compression performance, and is based on the wavelet transform and JPEG2000. It establishes a criterion for deciding the data format to be used in the wavelet transform, based on the data errors introduced by the frequency transform and quantization. This criterion was then used to determine the optimal data format experimentally. The results were a (1, 9) 10-bit fixed-point format for the filter coefficients and a (9, 7) 16-bit fixed-point format for the wavelet coefficients, and their optimality was confirmed.

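The (integer-bit, fraction-bit) split reported above can be illustrated with a small fixed-point conversion routine. This is a generic Python sketch, assuming (i, f) means i integer bits (sign included) and f fraction bits; the paper's exact convention, rounding rule, and error criterion are not given in the abstract, and the example coefficient is the well-known CDF 9/7 low-pass centre tap, not a value from the paper.

```python
def to_fixed(value: float, int_bits: int, frac_bits: int) -> int:
    """Quantize a float to a signed fixed-point code with the given bit split."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits - 1))        # most negative representable code
    hi = (1 << (int_bits + frac_bits - 1)) - 1     # most positive representable code
    return max(lo, min(hi, round(value * scale)))  # round, then saturate

def from_fixed(code: int, frac_bits: int) -> float:
    """Recover the real value represented by a fixed-point code."""
    return code / (1 << frac_bits)

# Example: a filter coefficient stored in the (1, 9) 10-bit format.
coeff = 0.6029490182363579                 # CDF 9/7 low-pass centre tap (illustrative)
code = to_fixed(coeff, int_bits=1, frac_bits=9)
error = abs(coeff - from_fixed(code, 9))   # quantization error a format criterion would compare
print(code, error)
```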

Reversible Data Hiding Scheme Based on Maximum Histogram Gap of Image Blocks

  • Arabzadeh, Mohammad;Rahimi, Mohammad Reza
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.8 / pp.1964-1981 / 2012
  • In this paper, a reversible data hiding scheme based on histogram shifting of host image blocks is presented. The method attempts to use the full available capacity for data embedding by dividing the image into non-overlapping blocks. Applying histogram shifting to each block requires extra information to be saved as overhead data for each block; this overhead (bookkeeping) information is needed to extract the payload and restore the block to its original state. A method to eliminate the need for this extra information is also introduced: it uses the maximum gap between histogram bins to find the pixel values that were used for embedding on the sender side. Experimental results show that the proposed method provides higher embedding capacity than the original histogram-shifting-based reversible data hiding method and its improved versions in the current literature, while maintaining the quality of the marked image at an acceptable level.
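For context, the classic histogram-shifting embedding step that this scheme builds on can be sketched as follows. This is the textbook peak/zero-bin variant applied to one block, not the authors' maximum-gap refinement; the function name, block interface, and the assumption that the empty bin lies to the right of the peak (and that the peak is below 255) are illustrative only.

```python
import numpy as np

def embed_block(block: np.ndarray, bits):
    """Embed a 0/1 bit sequence into an 8-bit block; returns marked block and (peak, zero)."""
    hist = np.bincount(block.ravel(), minlength=256)
    peak = int(hist.argmax())                          # most frequent gray level: capacity = hist[peak]
    zero = int(hist[peak + 1:].argmin()) + peak + 1    # empty (or minimal) bin to the right of the peak
    marked = block.astype(np.int32).copy()
    # Shift levels strictly between peak and zero one step right, freeing level peak+1.
    shift = (marked > peak) & (marked < zero)
    marked[shift] += 1
    # Embed: each pixel at the peak carries one bit (0 -> stay at peak, 1 -> move to peak+1).
    bit_iter = iter(bits)
    flat = marked.ravel()
    for i, v in enumerate(flat):
        if v == peak:
            try:
                flat[i] = peak + next(bit_iter)
            except StopIteration:
                break
    return flat.reshape(block.shape).astype(np.uint8), (peak, zero)
```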

Generation of DEM Data Under Forest Canopy Using Airborne Lidar

  • Woo Choong-Shik;Kim Tae-Guen;Shin Jung-Il;Lee Kyu-Sung
    • Proceedings of the KSRS Conference / 2005.10a / pp.512-514 / 2005
  • An accurate DEM of the forest floor is very important for extracting meaningful information about forest stand structure, such as tree heights, stand density, crown morphology, and biomass. In airborne lidar data processing, the DEM of the forest floor is mostly generated by interpolating the elevation points obtained from the last laser returns. In this study, we analyze the properties of the last laser returns under a relatively dense forest canopy. Airborne laser data were acquired over a study area in a relatively dense pine plantation forest. Two DEMs were generated: one using all the points in the last laser returns, and one using only the points remaining after removing non-ground points. From a preliminary analysis of these DEMs, we found that more than half of the last-return points are actually hits from the canopy, branches, and understory vegetation, which should be removed before generating the ground-surface DEM.

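A minimal sketch of the last-return gridding idea discussed above is given below. Real non-ground filtering (for example, progressive TIN densification) is considerably more involved; here a lowest-point-per-cell rule stands in for removing canopy, branch, and understory returns, empty cells are simply left as NaN, and the function name and cell size are illustrative.

```python
import numpy as np

def dem_from_last_returns(x, y, z, cell=1.0):
    """Grid last-return points, keeping the lowest elevation per cell as a crude 'ground' estimate."""
    xi = ((x - x.min()) / cell).astype(int)
    yi = ((y - y.min()) / cell).astype(int)
    dem = np.full((yi.max() + 1, xi.max() + 1), np.nan)
    for cx, cy, cz in zip(xi, yi, z):
        if np.isnan(dem[cy, cx]) or cz < dem[cy, cx]:
            dem[cy, cx] = cz            # lowest return in the cell approximates the forest floor
    return dem
```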

A Study on the 3-D Digital Modelling of the Sea Bottom Topography (3차원 해저지형 수치모델에 관한 연구)

  • 양승윤;김정훈;김병준;김경섭
    • Journal of the Korea Institute of Military Science and Technology / v.5 no.3 / pp.33-44 / 2002
  • In this study, 3-dimensional virtual visualization was performed for rapid and accurate analysis of sea bottom topography. The visualization used data extracted with a purpose-built program together with gridded data generated by an interpolation method. The data extraction program was developed in the AutoLISP programming language and was able to extract the needed sample bathymetry data from the electronic sea chart systematically and effectively. The gridded bathymetry data were generated by interpolation or extrapolation from the spatially irregular sample data. The resulting 3-dimensional virtual visualization proved feasible for analyzing the sea bottom topography in order to determine the route of submarine cable burial.
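The gridding step described above, interpolating spatially irregular soundings onto a regular grid, can be sketched as follows. The paper's actual interpolation/extrapolation scheme and the AutoLISP extraction step are not reproduced here; SciPy's griddata (linear inside the convex hull, nearest-neighbour as a crude stand-in for extrapolation outside it) is used purely for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def grid_bathymetry(x, y, depth, nx=200, ny=200):
    """Interpolate scattered soundings onto a regular grid for 3-D visualization."""
    gx, gy = np.meshgrid(np.linspace(x.min(), x.max(), nx),
                         np.linspace(y.min(), y.max(), ny))
    grid = griddata((x, y), depth, (gx, gy), method="linear")
    # Fill cells outside the convex hull of the samples with the nearest sounding.
    mask = np.isnan(grid)
    grid[mask] = griddata((x, y), depth, (gx[mask], gy[mask]), method="nearest")
    return gx, gy, grid
```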

Data Attribute Extraction Method by using SEDRIS Technology (SEDRIS 기술을 이용한 데이터 애트리뷰트 추출 방법)

  • Lee, Kwang-Hyung
    • The Journal of Korean Association of Computer Education / v.6 no.2 / pp.53-60 / 2003
  • The M&S community needs an environmental data representation and interchange mechanism that not only satisfies the requirements of today's systems but can also be extended to meet future data-sharing needs. This mechanism must allow for standard representation of, and access to, data, and must support databases containing integrated terrain, ocean, atmosphere, and space data. SEDRIS provides environmental data users and producers with a clearly defined interchange specification. In this paper, I present a method for extracting the data attributes contained in the synthetic environment domain using SEDRIS technology and its API.


A Study on the Hybrid Data Mining Mechanism Based on Association Rules and Fuzzy Neural Networks (연관규칙과 퍼지 인공신경망에 기반한 하이브리드 데이터마이닝 메커니즘에 관한 연구)

  • Kim Jin Sung
    • Proceedings of the Korean Operations and Management Science Society Conference / 2003.05a / pp.884-888 / 2003
  • In this paper, we introduce a hybrid data mining mechanism based on association rules and fuzzy neural networks (FNN). Most data mining mechanisms depend on an association rule extraction algorithm. However, basic association rule-based data mining has no learning ability, and sequential patterns of association rules cannot represent complicated fuzzy logic. To resolve these problems, we suggest a hybrid mechanism that combines association rule-based data mining with fuzzy neural networks. The mechanism consists of four phases. First, we use a general association rule mining mechanism to develop the initial rule base. Second, we use fuzzy neural networks to learn the historical patterns embedded in the database. Third, a fuzzy rule extraction algorithm is used to extract the implicit knowledge from the FNN. Fourth, we combine the association knowledge base and the fuzzy rules. The proposed hybrid data mining mechanism can reflect both association rule-based logical inference and complicated fuzzy logic.

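The four phases listed above can be summarized structurally as in the sketch below. The rule miner and the fuzzy neural network are stubbed out, and all class and method names are hypothetical; this shows only how the phases hand data to one another, not the authors' implementation.

```python
class HybridMiner:
    """Structural sketch of the four-phase association-rule + FNN pipeline."""

    def __init__(self):
        self.rule_base = []      # phase 1 output: association rules
        self.fuzzy_rules = []    # phase 3 output: rules extracted from the FNN

    def mine_association_rules(self, transactions, min_support=0.1):
        """Phase 1: build the initial rule base (stub: frequent single items only)."""
        n = len(transactions)
        counts = {}
        for t in transactions:
            for item in set(t):
                counts[item] = counts.get(item, 0) + 1
        self.rule_base = [item for item, c in counts.items() if c / n >= min_support]

    def train_fnn(self, records):
        """Phase 2: learn historical patterns with a fuzzy neural network (stub)."""
        self.fnn = ("trained-on", len(records))

    def extract_fuzzy_rules(self):
        """Phase 3: turn the trained FNN's implicit knowledge into explicit rules (stub)."""
        self.fuzzy_rules = [f"IF <fuzzy pattern> THEN <class>  # derived from {self.fnn}"]

    def combined_knowledge_base(self):
        """Phase 4: merge association rules and fuzzy rules for inference."""
        return {"association": self.rule_base, "fuzzy": self.fuzzy_rules}
```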

Surface Extraction from Multi-material CT Data

  • Fujimori, Tomoyuki;Suzuki, Hiromasa
    • International Journal of CAD/CAM / v.6 no.1 / pp.81-87 / 2006
  • This paper describes a method for extracting surfaces from multi-material CT (Computed Tomography) data. Most contouring methods, such as the Marching Cubes algorithm, assume that CT data are composed of only two materials. Some extended methods, such as [3, 6], can extract surfaces from a multi-material (non-manifold) implicit representation, but they are not directly applicable to CT data composed of three or more materials. Two major problems arise from the fundamentals of CT. The first is that n(n-1)/2 threshold values are needed for CT data containing n materials, and the appropriate threshold value must be selected for each boundary area. The second is that areas where three or more materials are adjacent to each other cannot be reconstructed from the CT data alone. In this paper, we propose a method that solves these problems by using image analysis, and we demonstrate its effectiveness with application examples that construct polygon models from CT data of machine parts.
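The first problem mentioned above, needing one threshold per material pair, is easy to make concrete. In the sketch below, the midpoint of two mean CT values is used as the pairwise boundary threshold; that choice, the material names, and the CT values are illustrative, not the paper's selection rule.

```python
from itertools import combinations

def pairwise_thresholds(material_means):
    """Return one threshold per unordered material pair: n materials -> n(n-1)/2 values."""
    return {(a, b): (material_means[a] + material_means[b]) / 2.0
            for a, b in combinations(sorted(material_means), 2)}

# Example: air, plastic and steel in a machine-part scan (made-up CT values).
print(pairwise_thresholds({"air": -1000.0, "plastic": 100.0, "steel": 8000.0}))
```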

Phase inversion of seismic data

  • Kim, Won-Sik;Shin, Chang-Soo;Park, Kun-Pil
    • 한국지구물리탐사학회: 학술대회논문집 / 2003.11a / pp.459-463 / 2003
  • Waveform inversion requires extracting reliable low-frequency content from seismic data in order to estimate the low-wavenumber velocity model. The low-frequency content of seismic data is usually discarded or neglected because of the band-limited response of the source and the receivers. In this study, however small the low-frequency spectrum of the seismic data is, we assume that reliable low-frequency phase information can be extracted from the seismic data and used in waveform inversion. To this end, we exploit frequency-domain finite element modeling and source-receiver reciprocity to calculate the Fréchet derivative of the phase of the seismic data with respect to the earth model parameter (such as velocity), and then apply a damped least-squares method to invert the phase of the seismic data. Through a numerical example, we attempt to demonstrate the feasibility of our method for estimating a correct velocity model for prestack depth migration.

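The damped least-squares step referred to above corresponds to the standard Levenberg-Marquardt (damped Gauss-Newton) model update; the notation below is generic rather than taken from the paper:

$$\Delta \mathbf{m} = \left(\mathbf{J}^{\mathsf{T}}\mathbf{J} + \lambda\,\mathbf{I}\right)^{-1}\mathbf{J}^{\mathsf{T}}\,\Delta\boldsymbol{\phi},$$

where $\mathbf{J}$ is the Fréchet derivative of the modeled phase with respect to the model parameters (velocity), $\Delta\boldsymbol{\phi}$ is the residual between observed and modeled phase, $\lambda$ is the damping factor, and $\Delta\mathbf{m}$ is the correction added to the current velocity model at each iteration.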

A Clustering Tool Using Particle Swarm Optimization for DNA Chip Data

  • Han, Xiaoyue;Lee, Min-Soo
    • Genomics & Informatics
    • /
    • v.9 no.2
    • /
    • pp.89-91
    • /
    • 2011
  • DNA chips are becoming increasingly popular as a convenient way to perform vast numbers of gene-related experiments on a single chip, and analyzing the data they provide is becoming correspondingly important. A very important analysis of DNA chip data is clustering genes to identify gene groups with similar properties, such as involvement in cancer. Clustering DNA chip data usually involves a large search space and has a very fuzzy character. The recently proposed Particle Swarm Optimization algorithm is a very good candidate for solving such problems. In this paper, we propose a clustering mechanism based on the Particle Swarm Optimization algorithm. Our experiments show that the PSO-based clustering algorithm is efficient in execution time for clustering DNA chip data, and can thus be used to extract valuable information, such as cancer-related genes, from DNA chip data with high cluster accuracy and in a timely manner.
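A minimal sketch of PSO-based clustering in the spirit described above: each particle encodes k candidate centroids and is scored by the total distance of points to their nearest centroid. The inertia and acceleration constants are common textbook defaults, and the encoding and fitness are illustrative rather than the paper's.

```python
import numpy as np

def pso_cluster(data, k=3, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Cluster rows of `data` by evolving particle-encoded centroid sets with PSO."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    pos = rng.uniform(data.min(0), data.max(0), size=(particles, k, dim))
    vel = np.zeros_like(pos)

    def fitness(centroids):
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        return d.min(axis=1).sum()          # total distance of points to their nearest centroid

    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        if f.min() < gbest_f:
            gbest, gbest_f = pos[f.argmin()].copy(), f.min()
    labels = np.linalg.norm(data[:, None, :] - gbest[None, :, :], axis=2).argmin(axis=1)
    return gbest, labels
```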

Implementing a Sustainable Decision-Making Environment - Cases for GIS, BIM, and Big Data Utilization -

  • Kim, Hwan-Yong
    • Journal of KIBIM / v.6 no.3 / pp.24-33 / 2016
  • Planning ranges from day-to-day, small-scale decisions to large-scale infrastructure investment decisions. For that reason, various attempts have been made to appropriately assist the decision-making process and its optimization. Lately, the emergence of very large data sets, also known as big data, has received great attention from diverse disciplines because of the versatility and adaptability of its use and its potential to generate new information. Accordingly, the implementation of big data and other information management systems, such as geographic information systems (GIS) and building information modeling (BIM), has received enough attention to establish its own professions and associated activities. In this context, this study illustrates a series of big data implementation cases that can provide lessons for the urban planning domain. Specifically, the case studies analyze how data were used to extract the most optimized solutions and which aspects could be helpful for planning decisions. Important notions about GIS and its application in various urban cases are also examined.