• Title/Summary/Keyword: Geometrically Defined Data

Search results: 9

A Voxelization for Geometrically Defined Objects Using Cutting Surfaces of Cubes (큐브의 단면을 이용한 기하학적인 물체의 복셀화)

  • Gwun, Ou-Bong
    • The KIPS Transactions:PartA / v.10A no.2 / pp.157-164 / 2003
  • Volume graphics has recently received much attention as a medical image analysis tool. In visualization based on volume graphics, a process called voxelization transforms geometrically defined objects into volumetric objects; it enables geometrically defined data to be volume-rendered together with sampled data. This paper suggests a voxelization method using the cutting surfaces of cubes, implements the method on a PC, and evaluates it with simple geometric modeling data to explore the validity of the method. The method features the ability to calculate the exact normal vector from a voxel, leaves no holes among voxels, and supports multi-resolution representation.
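The voxelization idea can be sketched minimally. This is not the paper's cutting-surface algorithm, just a hypothetical illustration of turning an implicitly defined object into voxels while keeping an exact analytic normal available per voxel:

```python
import numpy as np

def voxelize_sphere(radius=1.0, n=32):
    """Voxelize an implicitly defined sphere on an n^3 grid.

    Hypothetical stand-in for the paper's cube cutting-surface test:
    a voxel is marked occupied when its center lies inside the surface.
    """
    # Cell centers of an n^3 grid spanning [-1.5, 1.5]^3
    coords = np.linspace(-1.5, 1.5, n)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    return x**2 + y**2 + z**2 <= radius**2

def exact_normal(p):
    """Exact unit normal of the sphere at point p: the normalized
    gradient of the implicit function x^2 + y^2 + z^2."""
    p = np.asarray(p, dtype=float)
    return p / np.linalg.norm(p)
```

Because the normal comes from the implicit function rather than the discrete grid, it stays exact at any resolution, which is the property the abstract highlights.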

Incremental-runlength distribution for Markov graphic data source (Markov 그라픽 데이타에 대한 incremental-runlength의 확률분포)

  • 김재균
    • 전기의세계 / v.29 no.6 / pp.389-392 / 1980
  • For a Markov graphic source, it is well known that conditional runlength coding of the runs of correct prediction is optimal for data compression. However, because of its simpler counting and more strongly concentrated distribution, the incremental run is in some cases a better coding parameter than the run itself. It is shown that the incremental runlength is geometrically distributed, as is the runlength itself. The distribution is described explicitly with the basic parameters defined for a Markov model.
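The geometric distribution of run lengths is easy to check empirically. The sketch below assumes a simplified i.i.d. error model rather than the paper's Markov source; the number of correct predictions before an error then has mean (1 - q)/q for error probability q:

```python
import random

def run_lengths(q, n, seed=0):
    """Lengths of runs of correct predictions for an i.i.d. error source.

    Simplified stand-in for a Markov graphic source: each prediction is
    wrong with probability q, so run lengths are geometric on {0, 1, ...}.
    """
    rng = random.Random(seed)
    runs, current = [], 0
    for _ in range(n):
        if rng.random() < q:        # a prediction error ends the run
            runs.append(current)
            current = 0
        else:
            current += 1
    return runs

runs = run_lengths(q=0.2, n=200_000)
mean_run = sum(runs) / len(runs)   # expect about (1 - 0.2) / 0.2 = 4.0
```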


New Breast Measurement Technique and Bra Sizing System Based on 3D Body Scan Data

  • Oh, Seolyoung;Chun, Jongsuk
    • Journal of the Ergonomics Society of Korea / v.33 no.4 / pp.299-311 / 2014
  • Objective: The aim of this study was to develop a method for measuring breast size from three-dimensional (3D) body scan image data. Background: Previous bra studies established reference points by directly contacting the subject's naked skin to determine the boundary of the breast, but some subjects were uncomfortable with these types of measurements. This study examined noncontact methods of extracting breast reference points from 3D body scan data collected while subjects were wearing standardized soft bras. Method: 3D body scan data of 32 Korean women, selected from the Size Korea 2010 study, were analyzed. The breast landmarks were identified by graphic analyses of slicing contour lines on the 3D body scan data. Results: Three methods of determining bra cup size were compared. The M1 and M2 methods determined cup size from the difference between bust girth and under-bust girth. The M3 method determined bra cup size by measuring breast arc length. Conclusion: The researchers propose an anthropometric bra cup sizing system based on the breast arc length (M3 method), measured from geometrically defined landmarks on the 3D body scan slicing contour lines. The new bra cup size was highly correlated with breast depth. Application: The noncontact measuring method used in this study can be applied to ergonomic studies that measure sensitive body parts.
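On a sampled contour, an arc-length measurement like M3 reduces to summing segment lengths between landmark points. A minimal sketch (the point data and any landmark selection are assumed, not taken from the study):

```python
import math

def arc_length(points):
    """Arc length along a polyline of (x, y) contour points.

    Illustrative only: sums straight-line segment lengths along an
    ordered list of points sampled between two landmarks on a
    slicing contour.
    """
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total
```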

The application of geometrically exact shell element to NURBS generated by NLib (기하학적으로 정확한 쉘 요소의 NLib에 의해 생성된 NURBS 곡면에의 적용)

  • Choi Jin-Bok;Oh Hee-Yuel;Cho Maeng-Hyo
    • Proceedings of the Computational Structural Engineering Institute Conference / 2005.04a / pp.301-308 / 2005
  • In this study, we implement a framework that directly links a general tensor-based shell finite element to NURBS geometric modeling. Generally, in CAD systems, surfaces are represented by B-spline or non-uniform rational B-spline (NURBS) blending functions and control points. NURBS blending functions are defined by two parameters in a local region. A general tensor-based shell element also has a two-parameter representation of its surfaces, and all computations of geometric quantities can be performed on a local surface patch. Naturally, a B-spline surface or NURBS function can be directly linked to the shell analysis routine. In our study, we use NLib (NURBS library) to generate NURBS for shell finite element analysis. The NURBS can easily be generated by interpolating or approximating a given set of data points through NLib.
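The B-spline blending functions mentioned above can be evaluated with the Cox-de Boor recursion; NURBS adds rational weights on top of these. This sketch is generic and does not reflect NLib's actual API:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p.

    knots is the full knot vector; u is the parameter value. Evaluated on
    the half-open interval convention, so u should be strictly below the
    last knot.
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right
```

With an open knot vector such as [0, 0, 0, 1, 1, 1], the degree-2 basis functions reduce to the Bernstein polynomials and sum to one at every parameter value, which is what a shell element's two-parameter surface interpolation relies on.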


Determination of Feed System and Process Conditions for Injection Molding of Automotive Connector Part with Two Warpage Design Characteristics (두 개의 휨 설계특성을 갖는 자동차 커넥터 부품의 사출성형을 위한 피드 시스템 및 공정조건의 결정)

  • Yu, Man-Jun;Park, Jong-Cheon
    • Journal of the Korean Society of Manufacturing Process Engineers / v.20 no.12 / pp.36-43 / 2021
  • In this study, the optimal feed system and process conditions that simultaneously minimize the warpage occurring in each of the two shape features of the 2P Header HSG, an automotive connector part, were determined through injection molding simulation analysis. First, we geometrically defined the warping deformation of each of the two features and quantified them approximately using the injection molding simulation data. For design optimization, a full factorial experiment was conducted with the feed system, resin temperature, and packing pressure as design variables, and a follow-up experiment was conducted based on the analysis of the average warpage. An optimal design was generated considering both the warpage results and the resin-saving effect. In the optimal design, the warpages of the two shape features were predicted to be 0.18 and 0.29 mm, which meet the allowable warpage limit of 0.3 mm for part assembly.
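A full factorial design simply enumerates every combination of design-variable levels as one simulation run. A minimal sketch (the level values below are invented, not the study's):

```python
from itertools import product

def full_factorial(levels):
    """Enumerate all runs of a full factorial experiment.

    levels maps each design variable name to its list of levels;
    every combination becomes one run (a dict of variable -> level).
    """
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

# Hypothetical design variables and levels, for illustration only
runs = full_factorial({
    "feed_system": ["A", "B"],
    "resin_temp_C": [230, 250, 270],
    "packing_MPa": [60, 80],
})
```

With 2 x 3 x 2 levels this yields 12 runs; each run would be one injection molding simulation whose two warpage responses are recorded for the averages analysis.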

3-Dimensional Building Reconstruction with Airborne LiDAR Data

  • Lee, Dong-Cheon;Yom, Jae-Hong;Kwon, Jay-Hyoun;We, Gwang-Jae
    • Korean Journal of Geomatics / v.2 no.2 / pp.123-130 / 2002
  • LiDAR (Light Detection And Ranging) systems have had a profound impact on geoinformatics. The laser mapping system is now recognized as a viable way to produce digital surface models rapidly and efficiently, and the number of its applications and users has grown at a surprising rate in recent years. Interest is now focused on the reconstruction of buildings in urban areas from LiDAR data. Although with present technology objects can be extracted and reconstructed automatically from LiDAR data, the quality of the results remains a major concern in terms of geometric accuracy. It would be enormously beneficial to the geoinformatics industry if geometrically accurate models of the topographic surface, including man-made objects, could be produced automatically. The objectives of this study are to reconstruct buildings from airborne LiDAR data and to evaluate the accuracy of the results. First, the systematic errors involved with the ALS (Airborne Laser Scanning) system are introduced. Second, the overall LiDAR data quality was estimated from ground check points, and the laser points were then classified. Buildings were reconstructed from the point clouds classified as buildings: the most likely planar surfaces were estimated by the least-squares method from the laser points classified as planes, the intersection lines of the planes were then computed and defined as the building boundaries, and finally the quality of the reconstructed buildings was evaluated.
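The plane-estimation and boundary steps can be sketched with standard linear algebra: fit each roof plane by least squares (the normal is the smallest singular vector of the centered points) and intersect adjacent planes to obtain a boundary line. A generic sketch, not the authors' implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points.

    Returns (unit normal n, offset d) such that n . x = d on the plane;
    n is the singular vector of the centered cloud with the smallest
    singular value.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(n @ centroid)

def plane_intersection(n1, d1, n2, d2):
    """Line of intersection of two planes as (point, unit direction)."""
    direction = np.cross(n1, n2)
    # Solve for a point lying on both planes, with a third constraint
    # pinning the component along the line direction to zero.
    a = np.vstack([n1, n2, direction])
    point = np.linalg.solve(a, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```

Intersecting two fitted roof planes this way yields the ridge or eave line used as a building boundary.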


Topographic Normalization of Satellite Synthetic Aperture Radar(SAR) Imagery (인공위성 레이더(SAR) 영상자료에 있어서 지형효과 저감을 위한 방사보정)

  • 이규성
    • Korean Journal of Remote Sensing / v.13 no.1 / pp.57-73 / 1997
  • This paper deals with the correction of radiometric distortions induced by topographic relief. RADARSAT SAR image data were obtained over the mountainous area near the southern part of Seoul. Initially, the SAR data were geometrically corrected and registered to plane rectangular coordinates so that each pixel of the SAR image had known topographic parameters. The topographic parameters (slope and aspect) at each pixel position were calculated from digital elevation model (DEM) data with a spatial resolution comparable to the SAR data. The local incidence angle between the incoming microwave and the surface normal of the terrain slope was selected as the primary geometric factor for analyzing and correcting the radiometric distortions. Using digital maps of forest stands, several fields of rather homogeneous forest stands were delineated on the SAR image. Once the effects of the local incidence angle on the radar backscatter were defined, the radiometric correction was performed with an empirical function derived from the relationship between the geometric parameters and the mean radar backscatter. The correction effects were examined against ground truth data.
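The local incidence angle is the angle between the incoming radar look vector and the terrain surface normal built from slope and aspect. Conventions for aspect and look azimuth vary between systems; the sketch below assumes both are measured clockwise from north:

```python
import math

def local_incidence_angle(slope_deg, aspect_deg, look_azimuth_deg, look_incidence_deg):
    """Angle between the radar look vector and the terrain surface normal.

    Illustrative geometry only: slope/aspect define the terrain normal,
    look azimuth/incidence define the direction from the ground toward
    the sensor (both azimuths clockwise from north, z up).
    """
    s, a = math.radians(slope_deg), math.radians(aspect_deg)
    la, li = math.radians(look_azimuth_deg), math.radians(look_incidence_deg)
    # Terrain unit normal
    n = (math.sin(s) * math.sin(a), math.sin(s) * math.cos(a), math.cos(s))
    # Unit vector from the ground toward the sensor
    v = (math.sin(li) * math.sin(la), math.sin(li) * math.cos(la), math.cos(li))
    dot = sum(ni * vi for ni, vi in zip(n, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))
```

On flat terrain the local incidence angle equals the nominal incidence angle, and a slope tilted toward the sensor reduces it, which is exactly why fore-slopes appear brighter and need radiometric normalization.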

A Study on Neutron Resonance Energy of 180Ta below 1eV Energy (1 eV 이하 에너지 영역에서의 180Ta 동위원소의 중성자공명에 대한 연구)

  • Lee, Samyol
    • Journal of the Korean Society of Radiology / v.8 no.6 / pp.287-292 / 2014
  • In this study, the measured neutron capture cross section of $^{180}Ta$ (natural abundance: 0.012 %) has been compared with the evaluated capture data. In general, a neutron capture resonance is described by the Breit-Wigner formula, which consists of resonance parameters such as the neutron width, the radiation width, and the total width. However, for $^{180}Ta$, experimental neutron capture cross-section data and resonance information below 10 eV are very scarce. Therefore, in this study, we analyzed the neutron resonance of $^{180}Ta$ by measuring the prompt gamma rays from the sample, and compared the resonance with the evaluated data of Mughabghab, ENDF/B-VII, JEFF-3.1, and TENDL-2012. Neutrons produced by photonuclear reactions with the 46-MeV electron linear accelerator at the Research Reactor Institute, Kyoto University, were used for the cross-section measurement of the $^{180}Ta(n,{\gamma})^{181}Ta$ reaction. $BGO(Bi_4Ge_3O_{12})$ scintillation detectors were used to measure the prompt gamma rays from the reaction, with the BGO spectrometer arranged geometrically as a total-energy absorption detector.
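The single-level Breit-Wigner shape referred to above can be written down directly. The sketch below keeps only the resonance denominator and the width product, omitting the wavelength (1/v) factor and the energy dependence of the widths; all parameter values in the test are invented:

```python
def breit_wigner_capture(E, E0, gamma_n, gamma_g, g=1.0):
    """Single-level Breit-Wigner capture cross section shape (arbitrary units).

    sigma(E) ~ g * Gamma_n * Gamma_g / ((E - E0)^2 + (Gamma / 2)^2),
    where Gamma = Gamma_n + Gamma_g is the total width, E0 the resonance
    energy, and g the statistical spin factor.
    """
    gamma = gamma_n + gamma_g
    return g * gamma_n * gamma_g / ((E - E0) ** 2 + (gamma / 2.0) ** 2)
```

Fitting E0 and the widths of this shape to the measured prompt-gamma yield is what "analyzing the resonance" amounts to in practice.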

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS have created enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is difficult to examine all of this text, it is important to access it rapidly and grasp its key points. To meet this need, many studies on text summarization for handling enormous amounts of text data have been proposed. In particular, many recent summarization methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, so-called "automatic summarization". However, almost all text summarization methods proposed to date construct the summary around the most frequent content of the original documents. Such summaries cannot adequately cover low-weight subjects that are mentioned less often in the original text; if a summary includes only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents cover. To avoid this bias, one can summarize with a balance among the topics of the documents so that every subject is represented, but an unbalanced distribution among those subjects may still remain. To retain subject balance in the summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate the subjects' portions evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that preserves the balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation concepts, "completeness" and "succinctness":
completeness means the summary should fully cover the contents of the original documents, and succinctness means the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From these weights, highly related terms can be identified for every topic, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms". However, the seed terms alone are too few to explain each subject, so sufficiently many terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after training, the similarity between any two terms can be derived from their word vectors using cosine similarity, and the higher the cosine similarity between two terms, the stronger their relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is constructed. The second phase allocates a subject to every sentence in the original documents. To grasp the content of each sentence, a frequency analysis is conducted with the terms in the subject dictionaries, and the TF-IDF weight of each subject is calculated, showing how much each sentence says about each subject. Because TF-IDF weights can grow without bound, the weights of every subject in each sentence are normalized to values between 0 and 1.
Each sentence is then assigned to the subject with its maximum TF-IDF weight, forming a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries, and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that the proposed summaries better retain the subject balance that the documents originally have.
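The subject-allocation step (normalize each subject's TF-IDF weights to 0-1 across sentences, then assign each sentence to its maximum-weight subject) can be sketched as follows; the sentence ids and weight values are invented for illustration:

```python
def allocate_subjects(weights):
    """Assign each sentence to its max-weight subject after 0-1 normalization.

    weights maps sentence id -> {subject: raw TF-IDF weight}. Each
    subject's weights are min-max scaled to [0, 1] across all sentences,
    then every sentence goes to its highest-scoring subject.
    """
    subjects = list(next(iter(weights.values())))
    norm = {}
    for subj in subjects:
        vals = [w[subj] for w in weights.values()]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0          # avoid division by zero
        for sid, w in weights.items():
            norm.setdefault(sid, {})[subj] = (w[subj] - lo) / span
    return {sid: max(w, key=w.get) for sid, w in norm.items()}
```

The normalization matters: without it, a globally frequent subject would dominate every sentence's raw TF-IDF weights and minor subjects would end up with empty sentence groups.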