• Title/Summary/Keyword: memory reconstruction

Search Results: 79

SURFACE RECONSTRUCTION FROM SCATTERED POINT DATA ON OCTREE

  • Park, Chang-Soo;Min, Cho-Hon;Kang, Myung-Joo
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.16 no.1 / pp.31-49 / 2012
  • In this paper, we propose a very efficient method that reconstructs a high-resolution surface from a set of unorganized points. Our method is based on the level set method on an adaptive octree. We start from the surface reconstruction model proposed in [20], which introduced a fast and efficient approach different from previous level set methods. Most existing methods [21, 22] evolve an initial surface toward the point cloud over time, whereas [20] formulates surface reconstruction as an elliptic problem in a narrow band around the point cloud. This yields a very fast method because the time step is no longer limited by the finite speed of propagation. However, that model was implemented only on a uniform grid, so it still requires a large amount of memory: the algorithm solves a linear system whose size equals the number of grid points in the narrow band. Moreover, it is difficult to make the band sufficiently narrow, since the band width depends on the distribution of the point data. As long as the method is implemented on a uniform grid, generating a high-resolution surface is nearly impossible because the memory requirement grows geometrically with resolution. We resolve this by adapting an octree data structure [12, 11] to our problem and by introducing a new redistancing algorithm that differs from the existing one [19].
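The memory argument above rests on refining the grid only where point data lives. A minimal sketch of that idea (an illustrative adaptive octree, not the paper's implementation; all names and parameters here are assumptions):

```python
class OctreeNode:
    """Cubic cell that subdivides only where scattered points accumulate."""

    def __init__(self, center, half, depth):
        self.center = center      # (x, y, z) center of the cell
        self.half = half          # half-width of the cubic cell
        self.depth = depth
        self.children = []        # empty for leaf cells
        self.points = []          # points stored at this leaf

    def insert(self, p, max_depth=6, max_points=1):
        if self.children:
            self._child_for(p).insert(p, max_depth, max_points)
            return
        self.points.append(p)
        if len(self.points) > max_points and self.depth < max_depth:
            self._subdivide()
            for q in self.points:
                self._child_for(q).insert(q, max_depth, max_points)
            self.points = []

    def _subdivide(self):
        h = self.half / 2.0
        cx, cy, cz = self.center
        for dx in (-h, h):
            for dy in (-h, h):
                for dz in (-h, h):
                    self.children.append(
                        OctreeNode((cx + dx, cy + dy, cz + dz), h, self.depth + 1))

    def _child_for(self, p):
        # Route p to the octant whose center is nearest.
        return min(self.children,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(c.center, p)))

    def leaf_count(self):
        if not self.children:
            return 1
        return sum(c.leaf_count() for c in self.children)
```

Cells far from the point cloud stay coarse, so memory scales with the data rather than with the cube of the resolution, which is the effect the abstract describes.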

'Demolition and Reconstruction' : The Direction of Organizational Reform in the Field of History and Archives for the Next Government ('해체와 재구성' 차기 정부의 역사·기록 분야 조직 개혁 방향)

  • Kwak, KunHong
    • The Korean Journal of Archival Studies / no.52 / pp.39-58 / 2017
  • It is the responsibility of the government organization in the field of history and archives to control the pre-production stage of records and their sphere of production. It should also manage all kinds of archives, including presidential records, since its function is to manage and share public information and to compile historical records. This study explains how an organization with all of these functions would constitute an ideal reorganization of the government, one that corresponds to the principles of governmental reorganization: transparency, responsibility, and communication. A reform plan needs a two-track approach. I propose establishing a 'Ministry of National Archives' or a 'National Memory Committee' as the organization in charge of national records and memory management. National memory is not limited to the public sphere; in the spirit of total archives, the organization should encompass the memory of the whole community, and it should be formed independently.

EFFICIENT COMPUTATION OF COMPRESSIBLE FLOW BY HIGHER-ORDER METHOD ACCELERATED USING GPU (고차 정확도 수치기법의 GPU 계산을 통한 효율적인 압축성 유동 해석)

  • Chang, T.K.;Park, J.S.;Kim, C.
    • Journal of computational fluids engineering / v.19 no.3 / pp.52-61 / 2014
  • The present paper deals with the efficient computation of higher-order CFD methods for compressible flow using graphics processing units (GPUs). Higher-order CFD methods, such as discontinuous Galerkin (DG) methods and correction procedure via reconstruction (CPR) methods, can achieve arbitrary higher-order accuracy with a compact stencil on unstructured meshes. However, they require much higher computational costs than the widely used finite volume methods (FVM). A graphics processing unit, consisting of hundreds or thousands of small cores, is well suited to the massively parallel computations these higher-order methods require and can greatly reduce computation time. The higher-order multi-dimensional limiting process (MLP) is applied for robust control of numerical oscillations around shock discontinuities and is implemented efficiently on the GPU. The program is written and optimized with NVIDIA's CUDA library. The algorithms are implemented to guarantee accurate and efficient computation under the shared-memory parallel programming model of the GPU. Extensive numerical experiments validate that the GPU successfully accelerates the computation of compressible flow with higher-order methods.
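The limiting idea mentioned above can be illustrated with a much simpler 1D analogue. This sketch is an assumption for illustration, not the paper's MLP limiter or its GPU code: a classic minmod slope limiter that suppresses spurious oscillations when reconstructing cell-interface values from cell averages.

```python
def minmod(a, b):
    """Return the argument of smaller magnitude, or 0 if the signs differ."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_interface_values(u):
    """For each interior cell of averages u, return (left, right) interface values.

    The limited slope keeps reconstructed values within neighboring cell
    averages, which is what prevents new extrema near discontinuities.
    """
    faces = []
    for i in range(1, len(u) - 1):
        slope = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        faces.append((u[i] - 0.5 * slope, u[i] + 0.5 * slope))
    return faces
```

Because each cell's limited reconstruction depends only on its immediate neighbors, a loop like this maps naturally onto one GPU thread per cell, which is the parallelism the abstract exploits.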

Shape Reconstruction from Large Amount of Point Data using Repetitive Domain Decomposition Method (반복적 영역분할법을 이용한 대용량의 점데이터로부터의 형상 재구성)

  • Yoo, Dong-Jin
    • Journal of the Korean Society for Precision Engineering / v.23 no.11 s.188 / pp.93-102 / 2006
  • In this study, an advanced domain decomposition method is suggested for constructing surface models from a very large set of points. In this method, the spatial domain of interest, occupied by the input points, is divided repetitively. First, the space is divided into smaller subdomains in which the problem can be solved independently. Each subdomain is then divided into much smaller domains in which the problem can be solved locally. These local solutions of the subdivided domains are blended together with partition-of-unity functions to obtain a solution on each subdomain, and the subdomain solutions are merged to construct the whole surface model. The suggested method is conceptually simple and easy to implement. Since the RDDM (Repetitive Domain Decomposition Method) is efficient in computation time and memory consumption, the present study provides fast and accurate reconstruction of complex shapes from large point sets containing millions of points. The effectiveness and validity of the suggested method are demonstrated through numerical experiments on various types of point data.
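The blending step above can be sketched in 1D. This is a minimal partition-of-unity example under assumed weight functions (a linear hat), not the paper's exact scheme: local fits on overlapping subdomains are combined with weights that sum to one.

```python
def hat_weight(x, center, radius):
    """Compactly supported linear hat weight on [center - radius, center + radius]."""
    d = abs(x - center) / radius
    return max(0.0, 1.0 - d)

def blend(x, local_fits, centers, radius):
    """Blend local solutions local_fits[i](x) with normalized PU weights."""
    ws = [hat_weight(x, c, radius) for c in centers]
    total = sum(ws)
    if total == 0.0:
        return 0.0  # x lies outside every subdomain's support
    # Normalizing by `total` makes the weights a partition of unity.
    return sum(w / total * f(x) for w, f in zip(ws, local_fits))
```

In the overlap region the blended value transitions smoothly between the two local solutions, which is why the merged surface shows no seams at subdomain boundaries.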

Formation of an intestine-cartilage composite graft for tracheal reconstruction

  • Jheon, Sang-Hoon;Kim, Tae-Hun;Sung, Sook-Whan;Kim, Yu-Mi;Lim, Jeong-Ok;Baek, Woon-Yi;Park, Tae-In
    • Proceedings of the KOR-BRONCHOESO Conference / 2003.09a / pp.107-107 / 2003
  • Purpose: Tracheal transplantation is necessary in patients with extensive tracheal stenosis, congenital lesions, and many oncologic conditions, but it bears many critical problems compared with other organ transplantations. The purpose of this study was to make an intestine-cartilage composite graft for potential application to tracheal reconstruction by free intestinal graft. Methods: Hyaline cartilage was harvested from the trachea of 2-week-old New Zealand White rabbits. Chondrocytes were isolated and cultured for 8 weeks. Cultured chondrocytes were seeded in PLGA scaffolds and mixed in Pluronic gel. Chondrocyte-bearing scaffolds and gel mixtures were embedded in the submucosal area of the stomach and colon of 3 kg New Zealand White rabbits under general anesthesia. Ten weeks after implantation, the bowels were harvested for evaluation. Results: We could identify the implantation site by gross examination and palpation. The developed cartilage made a good frame for shape memory. Microscopic examinations, including special stains, showed absorption of the scaffold and cartilage formation, although not fully matured. Conclusion: The intestine-cartilage composite graft could be applicable as a future tracheal substitute and needs further investigation.


A 95% accurate EEG-connectome Processor for a Mental Health Monitoring System

  • Kim, Hyunki;Song, Kiseok;Roh, Taehwan;Yoo, Hoi-Jun
    • JSTS: Journal of Semiconductor Technology and Science / v.16 no.4 / pp.436-442 / 2016
  • An electroencephalogram (EEG)-connectome processor to monitor and diagnose mental health is proposed. From 19-channel EEG signals, the proposed processor determines whether the mental state is healthy or unhealthy by extracting significant features from the EEG signals and classifying them. A connectome approach is adopted for the best diagnosis accuracy, and synchronization likelihood (SL) is chosen as the connectome feature. Before computing SL, a reconstruction optimizer (ReOpt) block compensates several parameters, improving accuracy. During SL calculation, a sparse matrix inscription (SMI) scheme is proposed to reduce the memory size to 1/24. From the calculated SL information, a small-world feature extractor (SWFE) further reduces the memory size to 1/29. Finally, using the SLs or small-world features, a radial basis function (RBF) kernel-based support vector machine (SVM) diagnoses the user's mental health condition. For the RBF kernels, look-up tables (LUTs) replace floating-point operations, decreasing the required operations by 54%. Consequently, the EEG-connectome processor improves the diagnosis accuracy from 89% to 95% in the case of Alzheimer's disease. The proposed processor occupies $3.8mm^2$ and consumes 1.71 mW in $0.18{\mu}m$ CMOS technology.

Convergence Complexity Reduction for Block-based Compressive Sensing Reconstruction (블록기반 압축센싱 복원을 위한 수렴 복잡도 저감)

  • Park, Younggyun;Shim, Hiuk Jae;Jeon, Byeungwoo
    • Journal of Broadcast Engineering / v.19 no.2 / pp.240-249 / 2014
  • According to compressive sensing theory, it is possible to perfectly reconstruct a signal from fewer measurements than the Nyquist sampling rate requires, provided the signal is sparse and satisfies a few related conditions. From a practical viewpoint in image applications, it is important to reduce the computational complexity and memory burden of reconstruction. In this regard, a Block-based Compressive Sensing (BCS) scheme with Smooth Projected Landweber reconstruction (BCS-SPL) has already been introduced; however, its reconstruction is still computationally complex. In this paper, we propose a method that modifies the stopping criterion, tolerance, and convergence control of BCS-SPL to make it converge faster. Experimental results show that the proposed method requires fewer iterations yet achieves better reconstructed-image quality than conventional BCS-SPL.
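The stopping-criterion idea can be sketched on a toy problem. This is not the paper's BCS-SPL code; the step size, threshold, and tolerance below are assumed values chosen for illustration of a Landweber iteration with soft thresholding that stops once successive iterates stop changing.

```python
def soft(x, t):
    """Soft-thresholding operator promoting sparsity."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def landweber_sparse(A, y, step=0.5, thresh=0.01, tol=1e-6, max_iter=500):
    """Minimal sparse-recovery iteration: gradient step on ||Ax - y||^2,
    then soft thresholding; stop when the iterate change drops below tol."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for it in range(max_iter):
        # Residual r = A x - y
        r = [sum(A[i][k] * x[k] for k in range(n)) - y[i] for i in range(m)]
        # Landweber step followed by thresholding
        x_new = [soft(x[k] - step * sum(A[i][k] * r[i] for i in range(m)), thresh)
                 for k in range(n)]
        change = max(abs(a - b) for a, b in zip(x_new, x))
        x = x_new
        if change < tol:  # loosening tol trades accuracy for fewer iterations
            break
    return x, it + 1
```

Tuning `tol` and the convergence test is exactly the knob the abstract's method adjusts: a criterion that detects convergence earlier cuts iterations without visibly degrading the reconstruction.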

Evaluation of Open-source Software for Participatory Digital Archives: Understanding System Requirements for No Gun Ri Digital Archives (참여형 아카이브 구축을 위한 오픈소스 소프트웨어 평가 - 노근리디지털아카이브 구축을 위한 예비분석 -)

  • Park, Taeyeon;Sinn, Donghee
    • Journal of Korean Society of Archives and Records Management / v.16 no.1 / pp.121-150 / 2016
  • This paper reports an evaluation of six open-source software systems for participatory digital archives. It is part of an effort to create a digital platform for the social memory of No Gun Ri, a civilian massacre first publicly recognized in 1999. The process of how the incident was reported and investigated is critical to understanding it, and the course of its cultural recovery has witnessed the reconstruction of the No Gun Ri memory. It is therefore important that these archives embrace the social memory surrounding the massacre. Considering a virtual space for memory, this study adopts the form of participatory archives to provide a mechanism in which anyone can share their memories. To find a digital archives system for No Gun Ri, this study analyzed open-source software against identified functions and requirements for participatory digital archives and, drawing on the details of these digital systems, discussed how contents for social memory can be stored and used in a digital system.

A Study on Gusadang Kim Nakhaeng's Writing for Ancestral Rites - Exploring the source of his appealing (구사당(九思堂) 김낙행(金樂行)의 제문(祭文) 연구(硏究) - 호소력의 근원에 대한 탐색 -)

  • Jeong, Si-youl
    • (The)Study of the Eastern Classic / no.59 / pp.93-120 / 2015
  • The purpose of this study is to explore the source of the appeal with which Gusadang Kim Nakhaeng's writing for ancestral rites is equipped. Gusadang was one of the Confucianists of Yeongnam during the 18th century and was praised for his scholarly virtues of jihaenghapil and silcheongunghaeng. Although Gusadang's writing for ancestral rites and his teacher Milam Lee Jaeui's letters were even specially named 'gujemilchal', there has been almost no research on Gusadang's writing for ancestral rites. This study therefore selects for discussion three pieces of his writing for ancestral rites that are especially rich in emotional expression. Chapter 2, 'the Reconstruction of Memory in a Microscopic Perspective', presents why Gusadang's writing for ancestral rites is recognized as a work equipped with appeal. Writing for ancestral rites begins from memory that the living and the dead can share, and in reconstructing anecdotes of the dead in detail on the stage of ritual writing, the writer's memory plays an important role. Chapter 3, 'the Rhetorical Reconstruction of Elevated Sensitivity', examines the rhetorical devices such writing needs: proper rhetoric upgrades the dignity of the ritual writing and arouses sympathy in readers. Although writing for ancestral rites is supposed to express sadness by virtue of its formal characteristics, it should not end up a mere outlet of emotion. Chapter 4 looks into 'the Descriptive Reconstruction of Lamenting Sentiment'. A clear focus of description is needed to make the gesture of the living toward one no longer in the world an appealing story. While maintaining a distinct way of description, Gusadang systematically organizes the noble character of the dead, the pitiable death, the precious bond of the past, and the longing of those left behind.
Writing for ancestral rites is a field in which to mourn the dead and reproduce the sadness of the living through writing. For such a text to work properly as ritual writing, it must necessarily be appealing. This study finds that the appeal that gives life to ritual writing is grounded in authenticity.

An Efficient Flash Memory B-Tree Supporting Very Cheap Node Updates (플래시 메모리 B-트리를 위한 저비용 노드 갱신 기법)

  • Lim, Seong-Chae
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.706-716 / 2016
  • Because of their efficient space utilization and fast key-search times, B-trees have been widely adopted as indexes in HDD-based DBMSs. However, when a B-tree is stored in flash memory, its costly node updates may impair DBMS performance: random updates to the B-tree's leaf nodes can greatly enlarge the I/O cost of the flash storage's garbage-collection actions. To solve this problem, we make all parents of leaf nodes virtual nodes that are not stored physically. Rather, those nodes are dynamically generated and buffered by referring to their child nodes at access time during key searches. By performing node updates and tree reconstruction within a single flash block, the proposed B-tree reduces the I/O costs of garbage collection and update operations in flash. Moreover, our scheme provides better key-search performance than earlier flash-based B-trees. Through a mathematical performance model, we verify the performance advantages of the proposed flash B-tree.
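The virtual-parent idea above can be sketched minimally. The structure and names here are illustrative assumptions, not the paper's data layout: separator keys for a parent are derived on demand from the leaves, so updating a leaf never forces a physical parent rewrite.

```python
class Leaf:
    """A B-tree leaf holding sorted keys; only leaves are stored physically."""
    def __init__(self, keys):
        self.keys = sorted(keys)

def build_virtual_parent(leaves):
    """Generate parent separator keys from the smallest key of each leaf.
    This happens at access time; the parent is never written to flash."""
    return [leaf.keys[0] for leaf in leaves]

def search_leaf(leaves, key):
    """Route a key search through the virtual parent to the right leaf."""
    seps = build_virtual_parent(leaves)  # parent exists only in memory
    idx = 0
    for i, s in enumerate(seps):
        if key >= s:
            idx = i
    return leaves[idx]
```

Since the parent is recomputed from the leaves whenever it is needed, leaf-level inserts and deletes touch only one physical node, which is how the scheme avoids the cascading flash writes the abstract describes.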