• Title/Summary/Keyword: 2D map

A Study on the Optimization of Deburring Process for the Micro Channel using EP-MAP Hybrid Process (전해-자기 복합 가공을 이용한 마이크로 채널 디버링공정 최적화)

  • Lee, Sung-Ho;Kwak, Jae-Seob
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.22 no.2
    • /
    • pp.298-303
    • /
    • 2013
  • Magnetic abrasive polishing is one of the most promising finishing methods applicable to complex surfaces. Nevertheless, its efficiency is low when applied to very hard materials. For this reason, the EP-MAP hybrid process, which combines electrolytic polishing (EP) with magnetic abrasive polishing (MAP), was developed; it is expected to machine complex, hardened materials. In this research, a deburring process using the EP-MAP hybrid process is proposed. Applying EP-MAP deburring to a micro channel accomplishes both deburring and polishing in a single operation. EP-MAP deburring of the micro channel was performed, and the height error of the process was analyzed as a function of the process parameters through a design-of-experiments method. The optimum result of the EP-MAP hybrid deburring process is obtained when parameter A (magnetic flux density) is set at level 1, parameters B (electric potential) and C (working gap) at level 2, and parameter D (feed rate) at level 3.
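
The optimum above is read directly off a design-of-experiments table. A minimal sketch of that main-effects analysis, assuming a hypothetical L9 orthogonal array and illustrative height-error readings (none of these values are from the paper):

```python
import numpy as np

# Hypothetical L9(3^4) orthogonal array: each row is one run, columns are
# the levels (0-2) chosen for factors A, B, C, D. Height errors (um) are
# illustrative placeholders, not measurements from the study.
runs = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
height_error = np.array([4.1, 3.2, 3.8, 3.5, 4.4, 3.9, 4.6, 3.1, 4.0])

# Main-effects analysis: mean height error at each level of each factor;
# the best level of a factor is the one minimizing the mean error.
factors = ["A (flux density)", "B (potential)", "C (working gap)", "D (feed rate)"]
for i, name in enumerate(factors):
    means = [height_error[runs[:, i] == lv].mean() for lv in range(3)]
    best = int(np.argmin(means))
    print(f"{name}: level means {np.round(means, 2)} -> best level {best + 1}")
```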

Analysis of overlap ratio for registration accuracy improvement of 3D point cloud data at construction sites (건설현장 3차원 점군 데이터 정합 정확성 향상을 위한 중첩비율 분석)

  • Park, Su-Yeul;Kim, Seok
    • Journal of KIBIM
    • /
    • v.11 no.4
    • /
    • pp.1-9
    • /
    • 2021
  • Compared to general scanning data, a 3D digital map of a large construction site or complex building consists of millions of points. A large construction site must be scanned multiple times by drone photogrammetry or a terrestrial laser scanner (TLS) survey, and the scanned point cloud data must be registered at high resolution and high point density. Unlike the registration of 2D data, translation and rotation matrices are used to register 3D point cloud data, and achieving high accuracy is not easy in the 3D Cartesian coordinate system. Therefore, in this study, the iterative closest point (ICP) registration method was employed to improve the accuracy of the 3D digital map under different overlap ratios between the maps. Accuracy was tested using overlap ratios of the two digital maps from 10% to 100%, and the results present the optimal overlap ratio for ICP registration of digital maps.
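
For reference, a minimal point-to-point ICP loop in NumPy/SciPy; the study's overlap-ratio test would crop the two clouds to the desired overlap before running a routine like this. This is a generic ICP sketch, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Minimal point-to-point ICP. source, target: (n, 3) arrays.
    Returns accumulated rotation R (3x3) and translation t (3,)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        # 1. Correspondences: nearest target point for each source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform via SVD of the cross-covariance matrix.
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        # 3. Apply and accumulate: p' = R p + t.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```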

Improved Disparity Map Computation on Stereoscopic Streaming Video with Multi-core Parallel Implementation

  • Kim, Cheong Ghil;Choi, Yong Soo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.2
    • /
    • pp.728-741
    • /
    • 2015
  • Stereo vision has become an important technical issue in the fields of 3D imaging, machine vision, robotics, and image analysis. Depth map extraction from stereo video is a key technology of stereoscopic 3D video and requires stereo correspondence algorithms: a matching process that evaluates a similarity measure for each disparity value, followed by an aggregation and optimization step. Since this requires a lot of computational power, there are significant speed advantages in exploiting the parallel processing available on processors, and multi-core CPUs allow many parallel programming technologies to be applied on users' computing devices. This paper proposes parallel implementations for calculating the disparity map using shared-memory programming and the streaming SIMD extension technology, thereby taking advantage of both the hardware and software features of the multi-core processor. For the performance evaluation, we implemented a parallel SAD algorithm with OpenMP and SSE2 and compared its processing speed with a non-parallel version on stereoscopic streaming video. The experimental results show that both technologies have a significant effect on performance and achieve great improvements in processing speed.
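
The SAD matching and winner-take-all steps can be sketched as below. The paper parallelizes a C implementation with OpenMP and SSE2; this illustrative NumPy version leans on array vectorization instead, and the window size and disparity range are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, win=7):
    """Block-matching disparity map. left, right: rectified 2-D grayscale
    float arrays of equal shape. Returns per-pixel disparity (int)."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Absolute difference of left(x) against right(x - d).
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # SAD over a centered win x win window = box mean * window area.
        cost[d, :, d:] = uniform_filter(diff, size=win) * win * win
    # Winner-take-all: pick the disparity with minimum aggregated cost.
    return np.argmin(cost, axis=0)
```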

Dynamic Generation Methods of the Wireless Map Database using Generalization and Filtering (Generalization과 Filtering을 이용한 무선 지도 데이터베이스의 동적 생성 기법)

  • Kim, Mi-Ran;Choe, Jin-O
    • The KIPS Transactions:PartD
    • /
    • v.8D no.4
    • /
    • pp.367-376
    • /
    • 2001
  • An existing map database cannot be used directly for a wireless electronic map service: the data volume of a map is too large to transfer wirelessly, and even when the map is transferred successfully, the devices that display it usually lack the resources of desktop computers. Constructing a map database exclusively for wireless service is also unacceptable because of the vast cost. We propose a new technique that dynamically generates a map for wireless service from the existing map database. The technique includes a generalization method that reduces the map data volume and a filtering method that guarantees the data volume does not exceed the bandwidth limit. Generalization is performed in three steps: merging the layers, reducing the size of spatial objects, and processing the user interface. Filtering is performed by two modules, a counter and a selector. The counter module checks whether the data volume of the map generated by generalization exceeds the bandwidth limit; the selector module eliminates the excess objects and selects the rest on the basis of distance.
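
A sketch of the two filtering modules, under hypothetical definitions: the object layout, byte sizes, and the 8 KB budget are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class MapObject:
    layer: str
    distance: float      # distance from the map centre
    encoded_size: int    # bytes after generalization

BANDWIDTH_LIMIT = 8 * 1024   # hypothetical per-request budget, in bytes

def counter(objects):
    """Counter module: total volume of the generalized map, in bytes."""
    return sum(o.encoded_size for o in objects)

def selector(objects, limit=BANDWIDTH_LIMIT):
    """Selector module: keep the nearest objects until the budget is spent,
    eliminating the excess objects on the basis of distance."""
    kept, volume = [], 0
    for obj in sorted(objects, key=lambda o: o.distance):
        if volume + obj.encoded_size > limit:
            break
        kept.append(obj)
        volume += obj.encoded_size
    return kept

def filter_map(objects, limit=BANDWIDTH_LIMIT):
    return list(objects) if counter(objects) <= limit else selector(objects, limit)
```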

Evaluation of Non-iterative Shimming Using 2-D Field Map Compared with Simplex Shimming

  • Park, Min-Seok;Kim, Si-Seung;Park, Dae-Jun;Chung, Sung-Taek
    • Proceedings of the KSMRM Conference
    • /
    • 2001.11a
    • /
    • pp.152-152
    • /
    • 2001
  • Purpose: The most common instrumental approach to automatic shimming has been based on an iterative optimization routine (e.g., simplex) that adjusts the shim settings to maximize the envelope of the FID. The disadvantage of the iterative method, however, is that computing the shim values takes very long. This paper proposes a non-iterative method that uses a 2D field map to adjust the shim settings rapidly.
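
The non-iterative idea, sketched under simple assumptions: fit a small set of shim basis terms to the measured field map in one least-squares solve. The basis subset and unit-square coordinates here are illustrative, and the calibration from fitted coefficients to actual shim currents is omitted:

```python
import numpy as np

def shim_from_field_map(b0_map, mask):
    """Non-iterative shimming sketch: least-squares fit of shim basis
    terms to a 2-D B0 field map, restricted to voxels inside `mask`."""
    ny, nx = b0_map.shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    # Illustrative basis: constant, x, y, xy, x^2 - y^2.
    basis = np.stack([np.ones_like(x), x, y, x * y, x**2 - y**2], axis=-1)
    A = basis[mask]                   # (n_voxels, n_terms)
    b = b0_map[mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b0_map - basis @ coeffs
    # Applying the negated fit as the shim correction flattens the field
    # in a single solve, instead of the many FID evaluations of a simplex.
    return -coeffs, residual
```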

Evaluation of MR-SENSE Reconstruction by Filtering Effect and Spatial Resolution of the Sensitivity Map for the Simulation-Based Linear Coil Array (선형적 위상배열 코일구조의 시뮬레이션을 통한 민감도지도의 공간 해상도 및 필터링 변화에 따른 MR-SENSE 영상재구성 평가)

  • Lee, D.H.;Hong, C.P.;Han, B.S.;Kim, H.J.;Suh, J.J.;Kim, S.H.;Lee, C.H.;Lee, M.W.
    • Journal of Biomedical Engineering Research
    • /
    • v.32 no.3
    • /
    • pp.245-250
    • /
    • 2011
  • Parallel imaging provides several advantages for a multitude of MRI applications. In the SENSE technique in particular, sensitivity maps are always required to determine the reconstruction matrix, and a number of different approaches to using the sensitivity information from the coils have been demonstrated to improve image quality. Moreover, many filtering methods, such as the adaptive matched filter and nonlinear diffusion, have been proposed to suppress background noise. In this study, we performed SENSE reconstructions in computer simulation to determine the most suitable method, comparing filtering against polynomial fitting of varying order applied at different spatial resolutions of the sensitivity map. The image was obtained on a 0.32 T MRI system (Magfinder II, Genpia, Korea) using a spin-echo pulse sequence (TR/TE = 500/20 ms, FOV = 300 mm, matrix = $128{\times}128$, thickness = 8 mm). For the simulation, the image was multiplied by four linear-array coil sensitivities formed from 2D Gaussian distributions, and complex white Gaussian noise was added. The two processing methods, polynomial fitting and filtering, were applied at each spatial resolution of the sensitivity map, and each coil image was subsampled at reduction factors (r-factors) of 2 and 4. The results were compared by the mean geometry factor (g-factor) and the artifact power (AP) at r-factors of 2 and 4. Across the changes in sensitivity-map resolution and r-factor, the polynomial fitting methods gave better results than the general filtering methods. Although this work is limited to computer simulation with a linear coil array rather than experiment, the method may be useful for determining the optimal sensitivity map for a linear coil array.
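
For reference, a sketch of the g-factor evaluation used above, assuming 1-D Cartesian undersampling and an identity noise covariance; the sensitivity-map fitting and filtering steps under comparison are not shown:

```python
import numpy as np

def sense_g_factor(sens, r):
    """g-factor map for SENSE with 1-D undersampling along y.
    sens: (n_coils, ny, nx) complex sensitivity maps; r: reduction factor.
    Uses g = sqrt([ (S^H S)^-1 ]_pp [ S^H S ]_pp) with identity noise cov."""
    n_coils, ny, nx = sens.shape
    g = np.zeros((ny, nx))
    for y in range(ny // r):
        for x in range(nx):
            # The r rows that alias onto each other at this y position.
            rows = [y + k * (ny // r) for k in range(r)]
            S = sens[:, rows, x]                 # (n_coils, r)
            SHS = S.conj().T @ S                 # (r, r)
            SHS_inv = np.linalg.pinv(SHS)
            for i, row in enumerate(rows):
                g[row, x] = np.sqrt(abs(SHS_inv[i, i] * SHS[i, i]))
    return g
```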

Design and Implementation of Object Reusing Methods for Mobile Vector Map Services (모바일 벡터 지도 서비스를 위한 객체 재사용 기법의 설계 및 구현)

  • Kim, Jin-Deog;Choi, Jin-Oh
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.359-366
    • /
    • 2003
  • Although reusing cached data for scrolling the map reduces the amount of data passed between client and server, it requires coordinate conversion, selective deletion of objects, cache compaction, and an object-structuring step on the client. The conversion is a time-intensive operation because of the limited resources of mobile phones, such as low computing power and small memory. Therefore, to control the map efficiently in a vector-map service on mobile phones, methods are needed that reuse cached objects, both to reduce wireless network bandwidth and to cope with the phones' limited resources. This paper proposes methods that reuse previously received spatial objects for map control in a mobile vector-map service system based on a client-server architecture. Experiments conducted on Web GIS systems with real data show that the proposed method is appropriate for map services on mobile phones. We also analyze the respective advantages and drawbacks of reusing cached data and of transmitting raw data.
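
A sketch of the reuse step after a scroll, under assumed data structures: the CachedObject layout and the point-in-window visibility test are illustrative, and real clients would also compact the cache and restructure objects, as the abstract notes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CachedObject:
    oid: int
    points: tuple    # world coordinates: ((x1, y1), (x2, y2), ...)

def scroll_viewport(cache, window, dx, dy):
    """Shift the query window by (dx, dy), keep cached objects that are
    still visible, and leave only the newly exposed strip for the server.
    window = (xmin, ymin, xmax, ymax) in world coordinates."""
    xmin, ymin, xmax, ymax = window
    new_window = (xmin + dx, ymin + dy, xmax + dx, ymax + dy)

    def visible(obj):
        return any(new_window[0] <= x <= new_window[2] and
                   new_window[1] <= y <= new_window[3]
                   for x, y in obj.points)

    reused = [obj for obj in cache if visible(obj)]
    return new_window, reused

def world_to_screen(pt, window, screen_w, screen_h):
    """Coordinate conversion from world to device coordinates."""
    xmin, ymin, xmax, ymax = window
    sx = (pt[0] - xmin) / (xmax - xmin) * screen_w
    sy = (ymax - pt[1]) / (ymax - ymin) * screen_h  # screen y grows downward
    return sx, sy
```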

Update of Digital Map by using The Terrestrial LiDAR Data and Modified RANSAC (수정된 RANSAC 알고리즘과 지상라이다 데이터를 이용한 수치지도 건물레이어 갱신)

  • Kim, Sang Min;Jung, Jae Hoon;Lee, Jae Bin;Heo, Joon;Hong, Sung Chul;Cho, Hyoung Sig
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.4
    • /
    • pp.3-11
    • /
    • 2014
  • Recently, rapid urbanization has necessitated continuous updates of the digital map to provide the latest, accurate information to users. However, conventional aerial photogrammetry has restrictions on the periodic update of small areas due to its high cost, and as-built drawings also bring problems in maintaining quality. As an alternative, this paper proposes a scheme for efficient and accurate updating of the digital map using point cloud data acquired by a Terrestrial Laser Scanner (TLS). Initially, the building sides are extracted from the whole point cloud and projected onto a 2D image to trace out the 2D building footprints. A 2D affine model is used to register the extracted footprints on the digital map. For affine parameter estimation, the centroids of the footprint groups are randomly chosen and matched by means of a modified RANSAC algorithm. The experimental results show that the proposed algorithm makes it possible to update the digital map using building footprints extracted from TLS data.
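
A sketch of the affine estimation by RANSAC: the paper's modification draws random footprint-centroid matches, whereas this generic version assumes candidate correspondences src[i] ↔ dst[i] are already hypothesized; the tolerance and iteration counts are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine: dst ≈ [src | 1] @ params. src, dst: (n, 2)."""
    X = np.hstack([src, np.ones((len(src), 1))])       # (n, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2)
    return params

def ransac_affine(src, dst, iters=500, tol=0.5, seed=0):
    """RANSAC over centroid correspondences; returns the best affine fit."""
    rng = np.random.default_rng(seed)
    X = np.hstack([src, np.ones((len(src), 1))])
    best, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        params = fit_affine(src[idx], dst[idx])
        residuals = np.linalg.norm(X @ params - dst, axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best = fit_affine(src[inliers], dst[inliers])  # refit on inliers
    return best
```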

Image Encryption Based on Quadruple Encryption using Henon and Circle Chaotic Maps

  • Hanchinamani, Gururaj;Kulkarni, Linganagouda
    • Journal of Multimedia Information System
    • /
    • v.2 no.2
    • /
    • pp.193-206
    • /
    • 2015
  • In this paper a new approach to image encryption, based on quadruple encryption with dual chaotic maps, is proposed. The encryption process performs quadruple encryption by invoking the encrypt and decrypt routines with different keys in the sequence EDEE; the decryption process runs in the reverse direction, DDED. Key generation for the quadruple encryption is achieved with a 1D Circle map, and the chaotic values for the encrypt and decrypt routines are generated with a 2D Henon map. The encrypt routine E is composed of three stages: permutation, pixel-value rotation, and diffusion. Permutation is achieved by scrambling rows and columns with chaotic values and by exchanging the lower and upper principal- and secondary-diagonal elements based on the chaotic values. The second stage circularly rotates all pixel values based on the chaotic values. The last stage performs diffusion in two directions (forward and backward) with two previously diffused pixels and two chaotic values. The security and performance of the proposed scheme are assessed thoroughly by key-space, statistical, differential, entropy, and performance analyses. The proposed scheme is computationally fast with security intact.
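
A sketch of the first stage only: Henon-map chaotic values driving a row/column permutation. The parameters are the classic a = 1.4, b = 0.3; the rotation and diffusion stages, the Circle-map key schedule, and the EDEE sequencing are omitted:

```python
import numpy as np

def henon_sequence(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Iterates of the 2-D Henon map; (x, y) act as part of the key."""
    xs = np.empty(n)
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        xs[i] = x
    return xs

def permute_image(img, seq_row, seq_col):
    """Permutation stage: scramble rows, then columns, by chaotic ordering."""
    row_order = np.argsort(seq_row)     # chaotic values -> a permutation
    col_order = np.argsort(seq_col)
    return img[row_order][:, col_order], (row_order, col_order)

def unpermute_image(img, orders):
    """Invert both scrambles (used by the decrypt routine D)."""
    row_order, col_order = orders
    out = np.empty_like(img)
    out[row_order[:, None], col_order[None, :]] = img
    return out

# Usage: different initial conditions (keys) give different orderings.
# rows = henon_sequence(h); cols = henon_sequence(w, x=0.2)
```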

Image Compression Using DCT Map FSVQ and Single - side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.10
    • /
    • pp.2615-2628
    • /
    • 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2D DCT) and a finite state vector quantizer (FSVQ) when the vector quantizer is designed for image transmission. The map is made by dividing the input image according to edge quantity, and by this map the significant features of the training image are extracted using the 2D DCT. A master codebook of the FSVQ is generated by partitioning the training set with a binary tree. The state codebook is constructed from the master codebook, and the index of the input image is then searched in the state codebook rather than the master codebook. Because index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes according to the entropy coding rule, with Huffman coding assigning the transmission codes to the codebook entries. This paper proposes a single-side growing Huffman tree to speed up the Huffman code generation process. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm shows better picture quality, by 2.04 dB and 2.48 dB relative to PNN and by 1.75 dB and 0.99 dB relative to CVQ, respectively.
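
A sketch of the DCT-map step: label each 8x8 block as edge or shade by its AC energy after a 2D DCT. The threshold and block size are illustrative assumptions; the FSVQ codebooks and the single-side growing Huffman tree are not shown:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Orthonormal 2-D DCT of one block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_map(image, block=8, threshold=50.0):
    """Build the DCT map: 1 = edge block, 0 = shade block, classified by
    the AC (non-DC) energy of each block; the map steers which features
    are extracted for codebook training."""
    h, w = image.shape
    labels = np.zeros((h // block, w // block), dtype=np.uint8)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            coeffs = dct2(image[i:i + block, j:j + block].astype(float))
            ac_energy = np.sum(coeffs**2) - coeffs[0, 0]**2
            labels[i // block, j // block] = ac_energy > threshold
    return labels
```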
