• Title/Summary/Keyword: high-volume data

A VOLUME OF FLUID METHOD FOR FREE SURFACE FLOWS AROUND SHIP HULLS (선체주위 자유수면 유동 해석을 위한 VOF법 연구)

  • Park, I.R.
    • Journal of computational fluids engineering / v.20 no.1 / pp.57-64 / 2015
  • This paper describes a volume of fluid (VOF) method, mRHRIC, for the simulation of free surface flows around ship hulls and provides its validation against benchmark test cases. The VOF method is developed on the basis of the RHRIC method of Park et al., which uses high-resolution differencing schemes to algebraically preserve both the sharpness of the interface and the boundedness of the volume fraction. A finite volume method is used to solve the governing equations, while the realizable k-ε model is used for turbulence closure. The present numerical results of resistance performance tests for the DTMB5415 and KCS hull forms show good agreement with available experimental data and with the results of other free surface methods.
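
As a rough illustration of the boundedness property that such interface-capturing schemes enforce, the sketch below advects a 1D volume-fraction field with first-order upwinding and explicit clamping. It is a minimal stand-in, not the paper's mRHRIC scheme, and all names and parameters are illustrative.

```python
import numpy as np

def advect_volume_fraction(alpha, u, dx, dt):
    """One explicit upwind step for a 1D volume-fraction field on a
    periodic grid; the result is clamped so the volume fraction stays
    bounded in [0, 1], the property high-resolution schemes preserve
    algebraically."""
    upwind = alpha if u > 0 else np.roll(alpha, -1)   # donor-cell value
    flux = u * upwind                                 # face flux per cell
    dalpha = -(flux - np.roll(flux, 1)) * dt / dx     # flux divergence
    return np.clip(alpha + dalpha, 0.0, 1.0)

# a sharp interface in the middle of a periodic domain
alpha = np.where(np.arange(100) < 50, 1.0, 0.0)
for _ in range(200):
    alpha = advect_volume_fraction(alpha, u=1.0, dx=1.0, dt=0.5)  # CFL = 0.5
```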

Efficient Test Data Compression and Low Power Scan Testing in SoCs

  • Jung, Jun-Mo;Chong, Jong-Wha
    • ETRI Journal / v.25 no.5 / pp.321-327 / 2003
  • Testing time and power consumption are becoming increasingly important in SoC testing as the volume of test data for the intellectual property cores in SoCs grows. This paper presents a new algorithm that reduces scan-in power and test data volume using a modified scan latch reordering algorithm. We apply scan latch reordering to minimize the column Hamming distance in the scan vectors. During scan latch reordering, the don't-care inputs in the scan vectors are assigned for low power and high compression. Experimental results for the ISCAS 89 benchmark circuits show that reduced test data and low-power scan testing can be achieved in all cases.

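The core idea can be sketched in a few lines: treat each latch as a column of the scan vectors, chain columns greedily by Hamming distance (ignoring don't-cares), and then fill the don't-cares to repeat neighboring bits. This is a hedged sketch of the general technique, not the paper's exact algorithm or cost function.

```python
def hamming(col_a, col_b):
    """Conflicting bits between two latch columns, ignoring don't-cares ('X')."""
    return sum(a != b for a, b in zip(col_a, col_b) if a != 'X' and b != 'X')

def reorder_latches(vectors):
    """Greedy scan-latch reordering: chain each next column (latch) to
    the one closest in Hamming distance, then fill don't-cares from the
    left neighbor to reduce scan-in transitions."""
    cols = list(zip(*vectors))                  # one column per latch
    order, remaining = [0], set(range(1, len(cols)))
    while remaining:
        nxt = min(remaining, key=lambda j: hamming(cols[order[-1]], cols[j]))
        order.append(nxt)
        remaining.remove(nxt)
    filled = []
    for i, j in enumerate(order):
        col = [b if b != 'X' else ('0' if i == 0 else filled[-1][k])
               for k, b in enumerate(cols[j])]  # arbitrary '0' fill in column 0
        filled.append(col)
    return order, [''.join(bits) for bits in zip(*filled)]

order, low_power_vectors = reorder_latches(["10XX1", "0X101", "1100X"])
```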

Emerging Technologies for Construction Data Collection

  • Han, Seung-Woo
    • Proceedings of the Korean Institute Of Construction Engineering and Management / 2006.11a / pp.181-186 / 2006
  • Estimation based on current data of construction performance has become one of the critical subjects that researchers have been interested in over the past decades. For accurate measurement and estimation of construction performance, the method of data collection has the highest priority. However, there are many difficulties in data collection on construction jobsites due to the characteristics of the construction industry. With the development of new technologies in other industries, several technologies have recently begun to be applied in the construction field. Electronic tags based on identification technology, automatic volume measurement based on laser scanning technology, and the Global Positioning System (GPS) represent the technologies with the highest potential for use in construction. This study reviews specific aspects of these technologies, focusing on their utilization on construction jobsites. The challenges these technologies must overcome are also discussed.

An X-ray Diffraction Study on ZrH2 under High Pressures (고압하에서 ZrH2에 대한 X-선 회절 연구)

  • 김영호
    • Journal of the Mineralogical Society of Korea / v.9 no.1 / pp.35-42 / 1996
  • Polycrystalline ZrH2, which has a tetragonal crystal structure, has been compressed in a modified Bassett-type diamond anvil cell up to 36.0 GPa at room temperature. The X-ray diffraction data did not indicate any phase transitions over this pressure range. The pressure dependence of the a-axis, the c-axis, c/a, and the molar volume of ZrH2 was determined at pressures up to 36.0 GPa. Assuming the pressure derivative of the bulk modulus (K0') to be 4.11, an ultrasonic value for Zr, the bulk modulus (K0) was determined to be 160 GPa by fitting the pressure-volume data to the Birch-Murnaghan equation of state. The same sample was heated at 500 °C at a pressure of 9.8 GPa in a modified Sung-type diamond anvil cell. The unloaded and quenched sample revealed that the original tetragonal structure transforms into a hexagonal phase with a zero-pressure molar volume change of ~115.5%.

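For reference, the fit mentioned here uses the third-order Birch-Murnaghan equation of state, which in its standard form relates pressure to compression (V0 being the zero-pressure volume):

```latex
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3}
       - \left(\frac{V_0}{V}\right)^{5/3}\right]
       \left\{1 + \frac{3}{4}\left(K_0' - 4\right)
       \left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```

With K0' fixed at 4.11 as in the abstract, fitting the measured pressure-volume data determines K0.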

Visualization for Digesting a High Volume of the Biomedical Literature

  • Lee, Chang-Su;Park, Jin-Ah;Park, Jong-C.
    • Bioinformatics and Biosystems / v.1 no.1 / pp.51-60 / 2006
  • The paradigm in biology is currently changing from conducting hypothesis-driven individual experiments to utilizing the results of massive data analysis with appropriate computational tools. We present LayMap, an implemented visualization system that helps the user deal with a high volume of biomedical literature, such as MEDLINE, through layered maps constructed on the results of an information extraction system. LayMap also utilizes filtering and granularity for an enhanced view of the results. Since a biomedical information extraction system gives rise to a focused and effective way of slicing up the data space, the combined use of LayMap with such a system can help the user navigate the data space in a speedy and guided manner. As a case study, we applied the system to datasets of journal abstracts on 'MAPK pathway' and 'bufalin' from MEDLINE. With the proposed visualization, we successfully rediscovered pathway maps of reasonable quality for ERK, p38, and JNK. Furthermore, for bufalin, we were able to identify a potentially interesting relation between the Chinese medicine Chan su and apoptosis at a high level of detail.

Volume Holographic Optical Fingerprint Identification for Secure Entry System (안전 출입 시스템을 위한 체적 홀로그래픽 광지문인식)

  • Lee, S.H.;Park, M.S.;Shim, W.S.
    • Journal of the Korean Society of Safety / v.14 no.4 / pp.204-210 / 1999
  • We propose an optical fingerprint identification system that uses a volume hologram as the database of matched filters. The matched filters of a VanderLugt correlator are recorded in a volume hologram, which can store data with high density, transfer them at high speed, and select a randomly chosen data element. The multiple reference fingerprint photographs of the database are prerecorded in a photorefractive material in the form of Fourier transform images, simply by passing the image displayed on a spatial light modulator through a Fourier transform lens. Angular multiplexing of the multiple database holograms is achieved by controlling the reference beam directions with a step motor. Experimental results show that the proposed system can be used in secure entry systems that identify individuals for access to restricted areas and for security verification of credit cards, passports, and other IDs.

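A digital analogue of the correlation step may help make the idea concrete: a VanderLugt correlator multiplies the scene spectrum by the conjugate spectrum of a stored template, which is exactly FFT-based matched filtering. The sketch below is illustrative only; the paper performs this optically with an angularly multiplexed photorefractive hologram, and all names here are assumptions.

```python
import numpy as np

def matched_filter_peak(scene, template):
    """Correlation peak of scene against template via the FFT, the
    digital counterpart of a VanderLugt matched-filter correlator."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)
    corr = np.fft.ifft2(S * np.conj(T)).real
    return corr.max() / np.linalg.norm(template)  # normalize template energy

def identify(probe, database):
    """Return the stored fingerprint with the strongest correlation peak,
    mimicking readout of the multiplexed hologram database."""
    return max(database, key=lambda name: matched_filter_peak(probe, database[name]))
```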

Predictive Memory Allocation over Skewed Streams

  • Yun, Hong-Won
    • Journal of information and communication convergence engineering / v.7 no.2 / pp.199-202 / 2009
  • Adaptive memory management is a serious issue in data stream management. Data streams differ from the traditional stored relational model in several aspects: a stream arrives online, is high in volume, and can have skewed data distributions. Data skew is a common property of massive data streams. We propose a predictive allocation strategy, which uses predictive processing to cope with time-varying data skew. This processing includes memory usage estimation and indexing with timestamps. Our experimental study shows that the predictive strategy reduces both the required memory space and the latency for skewed data over varying time.
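
The abstract does not spell out the estimator, so the following is only a minimal sketch of predictive allocation under stated assumptions: per-window usage is smoothed exponentially and the next window is provisioned with a headroom factor. All names and parameters are illustrative.

```python
class PredictiveAllocator:
    """Predicts the next window's memory demand from past usage with
    exponential smoothing and provisions a small headroom factor."""

    def __init__(self, alpha=0.5, headroom=1.2):
        self.alpha = alpha          # smoothing weight for recent usage
        self.headroom = headroom    # over-allocation to absorb skew spikes
        self.estimate = None

    def observe(self, used_bytes):
        # update the usage estimate after each window
        if self.estimate is None:
            self.estimate = used_bytes
        else:
            self.estimate = (self.alpha * used_bytes
                             + (1 - self.alpha) * self.estimate)

    def next_allocation(self):
        return 0 if self.estimate is None else int(self.estimate * self.headroom)

alloc = PredictiveAllocator()
for used in [100, 180, 90, 400, 120]:   # skewed per-window usage
    alloc.observe(used)
    budget = alloc.next_allocation()
```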

An Experiment on Volume Data Compression and Visualization using Wavelet Transform (웨이블릿 변환을 이용한 볼륨데이타의 압축 및 가시화 실험)

  • 최임석;권오봉;송주환
    • Journal of KIISE: Computing Practices and Letters / v.9 no.6 / pp.646-661 / 2003
  • It is not easy to visualize large volume data stored on the client computers of a web environment. One solution is as follows: first compress the volume data, then store it in a database server, transfer it to the client computer, and finally visualize it with direct volume rendering on the client. In this case, a wavelet transform is usually used to compress the large data. This paper reports experiments for finding the wavelet bases and compression ratios that fit the above processing paradigm. In these experiments, we compress the volume data sets Engine, CThead, and Bentum to 50%, 10%, 5%, 1%, 0.1%, and 0.03% of the total data, respectively, using the Haar, Daubechies4, Daubechies12, and Daubechies20 wavelets; we then visualize them with direct volume rendering and evaluate the images by eye and with image comparison metrics. When the compression ratio is low, the Haar wavelet performs better than the other wavelets; when the compression ratio is high, Daubechies4 and Daubechies12 perform better. When judged by eye, a good compression ratio is about 1% of all the data; when measured with image comparison metrics, a good compression ratio is about 5-10% of all the data.
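
The compression step of such a pipeline can be sketched with the PyWavelets package: decompose the volume, keep only the largest fraction of coefficients, and reconstruct. The paper's exact multi-level setup, coefficient coding, and rendering are not reproduced; `keep` and `level` below are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def compress_volume(volume, wavelet='haar', keep=0.05, level=2):
    """Wavelet-compress a 3D volume by hard thresholding: keep only the
    largest fraction `keep` of the coefficients and reconstruct."""
    coeffs = pywt.wavedecn(volume, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)          # flatten coeff tree
    k = max(1, int(keep * arr.size))
    cutoff = np.partition(np.abs(arr).ravel(), -k)[-k]  # k-th largest magnitude
    arr[np.abs(arr) < cutoff] = 0.0                     # drop small coefficients
    recon = pywt.waverecn(pywt.array_to_coeffs(arr, slices, output_format='wavedecn'),
                          wavelet)
    return recon[tuple(slice(n) for n in volume.shape)]  # crop wavelet padding

volume = np.random.rand(64, 64, 64).astype(np.float32)    # stand-in for Engine/CThead
recon = compress_volume(volume, wavelet='db4', keep=0.01)  # keep ~1% of coefficients
```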

k-NN Join Based on LSH in Big Data Environment

  • Ji, Jiaqi;Chung, Yeongjee
    • Journal of information and communication convergence engineering / v.16 no.2 / pp.99-105 / 2018
  • k-Nearest neighbor join (k-NN Join) is a computationally intensive algorithm designed to find the k nearest neighbors in a dataset S for every object in another dataset R. Most related studies on k-NN Join are based on single-computer operations. As the data dimensionality and data volume increase, running the k-NN Join algorithm on a single computer cannot generate results quickly. To solve this scalability problem, we introduce a locality-sensitive hashing (LSH) k-NN Join algorithm implemented in Spark, an approach for high-dimensional big data. LSH is used to map similar data onto the same bucket, which reduces the data search scope. To achieve a parallel implementation of the algorithm on multiple computers, the Spark framework is used to accelerate the computation of distances between objects in a cluster. Results show that our proposed approach is fast and accurate for high-dimensional big data.
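
A single-machine sketch of the LSH bucketing idea follows: random hyperplanes assign each vector a sign pattern, similar vectors tend to share a bucket, and each object of R then searches only its bucket of S. The Spark parallelization is omitted and all parameters are illustrative; in practice several hash tables are used so that no object is left without candidates.

```python
import numpy as np

def lsh_knn_join(R, S, k=3, n_planes=8, seed=0):
    """Approximate k-NN join: hash R and S with random hyperplanes so
    that similar vectors tend to share a bucket, then search for each
    r in R only within its bucket of S."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((R.shape[1], n_planes))

    def bucket(X):
        # sign pattern of the projections -> one integer key per row
        bits = (X @ planes > 0).astype(int)
        return bits @ (1 << np.arange(n_planes))

    s_buckets = {}
    for j, key in enumerate(bucket(S)):
        s_buckets.setdefault(int(key), []).append(j)

    result = {}
    for i, key in enumerate(bucket(R)):
        candidates = s_buckets.get(int(key), [])  # may be empty for rare keys
        dists = [(np.linalg.norm(R[i] - S[j]), j) for j in candidates]
        result[i] = [j for _, j in sorted(dists)[:k]]
    return result

R = np.random.rand(1000, 32)
S = np.random.rand(5000, 32)
neighbors = lsh_knn_join(R, S, k=3)
```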

Large-Scale Ultrasound Volume Rendering using Bricking (블리킹을 이용한 대용량 초음파 볼륨 데이터 렌더링)

  • Kim, Ju-Hwan;Kwon, Koo-Joo;Shin, Byeong-Seok
    • Journal of the Korea Society of Computer and Information / v.13 no.7 / pp.117-126 / 2008
  • Recent advances in medical imaging technologies have enabled high-resolution data acquisition, so visualization of such large data sets on standard graphics hardware has become a popular research theme. Among the many visualization techniques, we focused on the bricking method, which divides the entire volume into smaller bricks and renders them in order. Since it switches between bricks in main memory and bricks in GPU memory on the fly, the number of these memory swaps has to be minimized to achieve better performance. Moreover, because the original bricking algorithm was designed for regular volume data such as CT and MR, applying it to ultrasound volume data, which is based on a toroidal coordinate space, revealed some performance degradation. In some areas near brick boundaries, an orthogonal viewing ray intersects a single brick twice, which consequently causes a single brick's memory to be uploaded onto the GPU twice in a single frame. To avoid this redundancy, we divide the volume into bricks that overlap one another. In this paper, we suggest a formula to determine an appropriate size for the shared areas between bricks. Using our formula, we could minimize the memory bandwidth and, at the same time, achieve better rendering performance.

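The overlap idea can be sketched along one axis: neighboring bricks share a band of samples so that a ray crossing a boundary region does not force the same brick to be uploaded to the GPU twice in one frame. The paper's formula for the appropriate overlap width is not reproduced here; the width below is a placeholder, and the 1D partitioner is only a generic illustration.

```python
def make_bricks(volume_size, brick_size, overlap=1):
    """Split a 1D extent into bricks whose neighbors share `overlap`
    samples, mimicking the shared boundary areas used to avoid
    redundant GPU uploads near brick boundaries."""
    bricks, start = [], 0
    step = brick_size - overlap          # neighbors share `overlap` samples
    while start + brick_size < volume_size:
        bricks.append((start, start + brick_size))
        start += step
    bricks.append((start, volume_size))  # last brick takes the remainder
    return bricks

# e.g. a 256-sample axis with 64-sample bricks sharing 4 samples
print(make_bricks(256, 64, overlap=4))
```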