• Title/Summary/Keyword: Multi-dimensional system

Search Results: 848

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.311-326 / 2024
  • The rapid development of neural network technology has enabled big-data-driven neural network models to reproduce the texture effects of complex objects. Because such models are limited in complex scenes, custom template matching must be established and applied across many areas of computer vision research. Systems that depend on small, high-quality labeled sample databases and deep feature connections perform relatively poorly at inferring texture effects. A style transfer algorithm based on neural networks collects and preserves pattern data, then extracts and modernizes pattern features; through the algorithm model, the texture and color of patterns are rendered more easily and displayed digitally. In this paper, following the texture-effect reasoning of custom template matching, the visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and a user-defined template is computed from the template's multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the outer region of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm accurately captures salient targets, removes more noise, and improves the visualization results. The proposed deep convolutional neural network optimization algorithm offers good speed, data accuracy, and robustness. It adapts to a wider range of task scenes, exposes redundant vision-related information in image conversion, and strengthens computing power, further improving the computational efficiency and accuracy of convolutional networks; this is of high significance for research on image information conversion.
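
The abstract above centers on scoring how well a scene matches a user-defined template with deep convolutional features. The paper's exact network is not reproduced here; as a minimal sketch, assuming a pretrained VGG-16 backbone (truncated to keep spatial texture cues) and cosine similarity over pooled feature vectors:

```python
# Minimal sketch, not the paper's implementation: CNN-feature similarity
# between a scene and a user-defined template. VGG-16 and cosine scoring
# are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Truncate VGG-16 after an intermediate conv block to keep texture cues.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def deep_features(img: Image.Image) -> torch.Tensor:
    """Return a pooled CNN feature vector for one image."""
    with torch.no_grad():
        fmap = vgg(preprocess(img).unsqueeze(0))       # (1, C, H, W)
    return F.adaptive_avg_pool2d(fmap, 1).flatten(1)   # (1, C)

def template_similarity(scene: Image.Image, template: Image.Image) -> float:
    """Cosine similarity between scene and template feature vectors."""
    return F.cosine_similarity(deep_features(scene),
                               deep_features(template)).item()

if __name__ == "__main__":
    scene = Image.new("RGB", (256, 256), "gray")      # placeholder images
    template = Image.new("RGB", (256, 256), "gray")
    print(f"similarity = {template_similarity(scene, template):.3f}")
```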

Development of Two-dimensional Prompt-gamma Measurement System for Verification of Proton Dose Distribution (이차원 양성자 선량 분포 확인을 위한 즉발감마선 이차원분포 측정 장치 개발)

  • Park, Jong Hoon;Lee, Han Rim;Kim, Chan Hyeong;Kim, Sung Hun;Kim, Seonghoon;Lee, Se Byeong
    • Progress in Medical Physics / v.26 no.1 / pp.42-51 / 2015
  • In proton therapy, verification of the proton dose distribution is important for treating cancer precisely and enhancing patient safety. To verify the proton dose distribution, our team developed a vertically aligned one-dimensional array detection system in a previous study; we measured the 2D prompt-gamma distribution by moving that detection system in the longitudinal direction and verified the similarity between the 2D prompt-gamma distribution and the 2D proton dose distribution. In the present study, we have developed a two-dimensional prompt-gamma measurement system consisting of a 2D parallel-hole collimator, 2D array-type NaI(Tl) scintillators, and a multi-anode PMT (MA-PMT) to measure the 2D prompt-gamma distribution in real time. The developed measurement system was tested with ²²Na (0.511 and 1.275 MeV) and ¹³⁷Cs (0.662 MeV) gamma sources; the energy resolutions at 0.511, 0.662, and 1.275 MeV were 10.9% ± 0.23%p, 9.8% ± 0.18%p, and 6.4% ± 0.24%p, respectively. Further, the energy resolution at the high gamma energy (3.416 MeV) of the double-escape peak from an Am-Be source was 11.4% ± 3.6%p. To estimate the performance of the developed measurement system, we measured the 2D prompt-gamma distribution generated by a PMMA phantom irradiated with a 45 MeV, 0.5 nA proton beam. Compared against an EBT film result, the 2D prompt-gamma distribution measured for 9 × 10⁹ protons is similar to the 2D proton dose distribution. In addition, the 45 MeV beam range estimated from the profile of the 2D prompt-gamma distribution was 17.0 ± 0.4 mm, closely matching the proton beam range of 17.4 mm.
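
The energy resolutions quoted above are, by convention, the FWHM of a fitted photopeak divided by the peak energy. A short illustrative check of that arithmetic, assuming a histogrammed spectrum and a simple Gaussian fit (not the authors' analysis code):

```python
# Illustrative check: energy resolution = FWHM / E of a fitted photopeak.
# The spectrum here is synthetic; real spectra would come from the detector.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

def energy_resolution(energy: np.ndarray, counts: np.ndarray) -> float:
    """Return FWHM / peak-energy (in %) from a Gaussian fit to a photopeak."""
    p0 = [counts.max(), energy[np.argmax(counts)], 0.05 * energy.mean()]
    (amp, mu, sigma), _ = curve_fit(gaussian, energy, counts, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)  # FWHM = 2.355 sigma
    return 100.0 * fwhm / mu

# Synthetic 0.662 MeV (137Cs) peak built with ~9.8% resolution:
e = np.linspace(0.5, 0.8, 300)
c = gaussian(e, 1000.0, 0.662, 0.662 * 0.098 / 2.355)
print(f"resolution ~ {energy_resolution(e, c):.1f} %")   # recovers ~9.8 %
```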

Assessment of Parallel Computing Performance of Agisoft Metashape for Orthomosaic Generation (정사모자이크 제작을 위한 Agisoft Metashape의 병렬처리 성능 평가)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.427-434 / 2019
  • In the present study, we assessed the parallel computing performance of Agisoft Metashape for orthomosaic generation; based on SfM (Structure from Motion) technology, Metashape can perform aerial triangulation, generate a three-dimensional point cloud, and produce an orthomosaic. Due to the nature of SfM, most of the time is spent on Align Photos, which performs relative orientation, and Build Dense Cloud, which generates the three-dimensional point cloud. Metashape can parallelize both processes using multiple CPU (Central Processing Unit) cores and the GPU (Graphics Processing Unit). An orthomosaic was created from large UAV (Unmanned Aerial Vehicle) images under six conditions combining three parallelization modes (CPU only, GPU only, and CPU + GPU) and two operating systems (Windows and Linux). To assess the consistency of the results across conditions, the RMSE (Root Mean Square Error) of aerial triangulation was measured using ground control points detected automatically on the images without human intervention. The results of orthomosaic generation from 521 UAV images of 42.2 million pixels showed that the combination of CPU and GPU performed best on the present system, and that Linux outperformed Windows under all conditions. However, the RMSE values of aerial triangulation revealed slight differences, within the error range, among the combinations. Therefore, Metashape still leaves something to be desired in delivering consistent results regardless of parallelization method and operating system.
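
The consistency metric above, the RMSE of aerial triangulation over ground control points, reduces to a small calculation. A hedged sketch with hypothetical coordinate arrays (Metashape itself is driven through its GUI or its own Python API, which is not reproduced here):

```python
# Sketch of the consistency metric: 3D RMSE of aerial triangulation over
# ground control points. Coordinates below are hypothetical examples.
import numpy as np

def triangulation_rmse(estimated: np.ndarray, reference: np.ndarray) -> float:
    """3D RMSE between triangulated and surveyed control-point coordinates.

    Both arrays have shape (n_points, 3) in the same map projection.
    """
    residuals = estimated - reference
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

# Per-condition RMSEs can then be compared across the six
# CPU/GPU x Windows/Linux combinations.
est = np.array([[100.02, 200.01, 49.98], [150.00, 250.03, 52.01]])
ref = np.array([[100.00, 200.00, 50.00], [150.01, 250.00, 52.00]])
print(f"RMSE = {triangulation_rmse(est, ref):.3f} m")
```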

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally performed in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on second-step applications such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve that quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties, and it is used to structure text data. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various respects. In particular, as the demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, traditional document embedding methods represented by doc2Vec generate a vector for each document using all the words contained in the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to represent a complex document with multiple subjects accurately. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after extracting keywords through various analysis techniques; since keyword extraction is not the core subject of the proposed method, however, we describe the process as applied to documents whose keywords are predefined in the text. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is affected not only by core words but also by miscellaneous words, the vectors corresponding to each document's keywords are extracted to form a per-document set of keyword vectors.
Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference.
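
The five-step pipeline above maps naturally onto off-the-shelf components. A minimal sketch under stated assumptions: gensim's Word2Vec stands in for step (2), scikit-learn's KMeans for step (4), and the toy corpus and keyword lists are hypothetical:

```python
# Minimal sketch of the proposed multi-vector embedding pipeline.
# The corpus, keyword lists, and cluster count are illustrative only.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# (1) Parsing: documents tokenized; keywords predefined per document.
corpus = [["deep", "learning", "image", "model"],
          ["market", "price", "stock", "model"],
          ["image", "classification", "network", "stock", "price"]]
doc_keywords = [["deep", "image"], ["stock", "price"],
                ["image", "network", "stock"]]

# (2) Word embedding over the whole corpus.
w2v = Word2Vec(corpus, vector_size=16, window=3, min_count=1, seed=0)

def multi_vector(keywords, n_subjects=2):
    """(3)-(5): extract keyword vectors, cluster them, and return one
    vector (the cluster centroid) per identified subject."""
    vecs = np.array([w2v.wv[k] for k in keywords if k in w2v.wv])
    k = min(n_subjects, len(vecs))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vecs)
    return km.cluster_centers_            # shape: (k, vector_size)

for i, kws in enumerate(doc_keywords):
    print(f"doc {i}: {multi_vector(kws).shape[0]} subject vector(s)")
```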

A Study on a Moving Adaptive Grid Generation Method Using a Level-set Scheme (레벨셋법을 이용한 이동 집중격자 생성법에 대한 연구)

  • Il-Ryong Park;Ho-Hwan Chun
    • Journal of the Society of Naval Architects of Korea / v.39 no.3 / pp.18-27 / 2002
  • To improve the accuracy of the solution near the boundary when analyzing, in an Eulerian framework, viscous flow around an arbitrary boundary that moves and deforms, a level-set-based grid deformation method is introduced to concentrate grid points near the boundary. This paper presents a new monitor function that can easily control the degree of grid-point concentration along the boundary. Computations of steady flow around a semi-circular cylinder mounted on the bottom of the flow domain were carried out to check the improvement in the solution obtained with the adaptive grid system combined with an immersed boundary method. The present numerical results agree well with solutions obtained on a body-fitted grid system and are more accurate than those computed on a non-adaptive grid system. To validate the practical usefulness of the present method, an extended analysis of flow around multiple bodies fixed in the flow domain was carried out. Finally, the present moving adaptive grid method was applied to a two-dimensional rising-bubble problem. The computed results show grid points well adapted around the bubble boundary at every time step and good agreement with results calculated on a fixed grid system.
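
The paper's monitor function is not given in the abstract; as an assumption-labeled illustration, a common choice is w(phi) = 1 + A * exp(-B * |phi|), which concentrates grid points where the level-set distance phi to the boundary is small, with A and B controlling the level of concentration, as the abstract describes:

```python
# Hedged illustration, not the paper's exact monitor function: a common
# level-set-based choice w(phi) = 1 + A*exp(-B*|phi|), equidistributed
# so that grid spacing shrinks near the boundary (phi = 0).
import numpy as np

def monitor(phi: np.ndarray, A: float = 5.0, B: float = 10.0) -> np.ndarray:
    """Grid-point weight as a function of the level-set distance phi."""
    return 1.0 + A * np.exp(-B * np.abs(phi))

# 1D demo: redistribute n points on [0, 1] so spacing ~ 1/monitor,
# clustering them near a boundary located at x = 0.5.
n = 21
x = np.linspace(0.0, 1.0, 512)
w = monitor(x - 0.5)                 # phi = signed distance to x = 0.5
cdf = np.cumsum(w); cdf /= cdf[-1]   # equidistribute the weight
xi = np.interp(np.linspace(0, 1, n), cdf, x)
print(np.round(np.diff(xi), 3))      # spacing shrinks near x = 0.5
```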

Spatial OLAP Implementation for GIS Decision-Making - With emphasis on Urban Planning - (GIS 의사결정을 지원하기 위한 Spatial OLAP 구현 - 도시계획을 중심으로 -)

  • Kyung, Min-Ju;Yom, Jae-Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.6 / pp.689-698 / 2009
  • A SOLAP system integrates and complements the functions of both OLAP and GIS systems, enabling users not only to access geospatial data easily but also to analyze it and extract information for decision making. In this study, a SOLAP system was designed and implemented to provide urban planners with GIS information for urban planning decisions. Rapid urbanization in Korea has produced an ill-balanced urban structure, the result of development without detailed analysis of urban plans. Systematic urban planning procedures and automated systems are crucial for detailed analysis of future development plans. Data on development regulations and the current status of land use need to be assessed precisely and instantly, and the multi-dimensional aspects of a suggested plan must be formulated instantly and examined thoroughly through 'what if' scenarios to arrive at the best possible plan. For the SOLAP system presented in this study, dimension tables and fact tables were designed to supply timely geospatial information to planners making urban planning decisions. The database was implemented using an open-source DBMS and populated with the necessary attribute data, freely available from the Statistics Korea homepage. It is anticipated that the SOLAP system presented in this study will contribute to better urban planning decisions in Korea through more timely and accurate provision of geospatial information.
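
The dimension and fact tables described above form a classic star schema. A minimal sketch using sqlite3 as a stand-in open-source DBMS; the table and column names are hypothetical, not the paper's actual urban-planning schema:

```python
# Sketch of a star schema for SOLAP-style roll-ups. Schema and sample
# rows are hypothetical illustrations, not the paper's design.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_location (          -- spatial dimension
    location_id INTEGER PRIMARY KEY,
    district TEXT, zone_type TEXT);
CREATE TABLE dim_time (              -- temporal dimension
    time_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_land_use (         -- fact table with measures
    location_id INTEGER REFERENCES dim_location,
    time_id INTEGER REFERENCES dim_time,
    developed_area_km2 REAL);
""")
con.executemany("INSERT INTO dim_location VALUES (?,?,?)",
                [(1, "Gangnam", "commercial"), (2, "Mapo", "residential")])
con.executemany("INSERT INTO dim_time VALUES (?,?)", [(1, 2008), (2, 2009)])
con.executemany("INSERT INTO fact_land_use VALUES (?,?,?)",
                [(1, 1, 12.4), (1, 2, 13.1), (2, 1, 8.7), (2, 2, 8.9)])

# A roll-up typical of OLAP 'what if' exploration: developed area by zone.
for row in con.execute("""
    SELECT d.zone_type, t.year, SUM(f.developed_area_km2)
    FROM fact_land_use f
    JOIN dim_location d USING (location_id)
    JOIN dim_time t USING (time_id)
    GROUP BY d.zone_type, t.year"""):
    print(row)
```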

A Study on the Use of Grid-based Spatial Information for Response to Typhoons (태풍대응을 위한 격자 기반 공간정보 활용방안 연구)

  • Hwang, Byungju;Lee, Junwoo;Kim, Dongeun;Kim, Jangwook
    • Journal of the Society of Disaster Information / v.17 no.1 / pp.25-38 / 2021
  • Purpose: To reduce the damage caused by recurrent typhoons, we propose a standardized grid that can be actively utilized in the prevention and preparation stages of typhoon response. We built grid-based convergence information on typhoon risk areas and demonstrated the effectiveness of this information in disaster response. Method: To generate convergent information on typhoon hazard areas that is useful when responding to a typhoon situation, we combined various types of data, both vector and raster, into small-grid-based information on typhoon hazard areas. A standardized grid model was applied for compatibility with information already produced and with the grid information generated by each local government. Result: By applying the national point number grid system, a grid of typhoon risk areas in Seoul was constructed that can be used effectively when responding to typhoon situations. The national point number grid system defines grid sizes in a multi-level hierarchical structure, and the Seoul typhoon risk grid was constructed using 100 m and 1,000 m grids. Conclusion: Using the real-time 5 km resolution grid-based weather information provided by the Korea Meteorological Administration, near-future typhoon hazard areas can in the future be derived from predicted typhoon tracks. In addition, the national point number grid system can be extended to global grid systems for a global response to various disasters.
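
The 100 m and 1,000 m grids above nest hierarchically. A hedged sketch of such a multi-level grid index (the national point number system's actual encoding is not reproduced; this only shows how a planar coordinate maps into nested cells):

```python
# Sketch of a two-level grid index; coordinates below are hypothetical.
def grid_cell(x_m: float, y_m: float, cell_size_m: int):
    """Return the (col, row) index of the cell containing a point,
    for planar coordinates in metres."""
    return int(x_m // cell_size_m), int(y_m // cell_size_m)

x, y = 953_482.0, 1_952_173.0    # hypothetical planar coordinates
coarse = grid_cell(x, y, 1_000)  # 1,000 m cell
fine = grid_cell(x, y, 100)      # 100 m cell nested inside it
assert (fine[0] // 10, fine[1] // 10) == coarse  # 10x10 nesting holds
print(coarse, fine)
```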

Prototype Design and Development of Online Recruitment System Based on Social Media and Video Interview Analysis (소셜미디어 및 면접 영상 분석 기반 온라인 채용지원시스템 프로토타입 설계 및 구현)

  • Cho, Jinhyung;Kang, Hwansoo;Yoo, Woochang;Park, Kyutae
    • Journal of Digital Convergence / v.19 no.3 / pp.203-209 / 2021
  • In this study, a prototype design model was proposed for an online recruitment system that crawls multi-dimensional data, analyzes social media, and validates the text information and video interviews submitted during the job application process. The study includes a comparative analysis through text mining to verify the authenticity of job application documents and to hire and place workers effectively based on their potential job capability. Using the prototype system, we conducted performance tests and analyzed the results for key performance indicators such as text mining accuracy and the recognition rate of the interview STT (speech-to-text) function. If commercialized based on the design specifications and prototype development results derived from this study, the system can be expected to serve as the intelligent online recruitment technology required by public- and private-sector recruitment markets in the future.
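
One way to realize the authenticity check described above is to compare application text against crawled social media text with TF-IDF cosine similarity. This is an illustrative sketch only; the texts and the threshold are hypothetical, not the paper's production pipeline:

```python
# Illustrative authenticity check: TF-IDF cosine similarity between an
# application statement and crawled posts. Texts/threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

application = "Led a data engineering team building real-time pipelines"
crawled = ["Posted about leading a data engineering team on real-time pipelines",
           "Shared photos from a cooking class last weekend"]

tfidf = TfidfVectorizer().fit_transform([application] + crawled)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
consistent = bool(scores.max() >= 0.3)   # hypothetical decision threshold
print(scores, "consistent:", consistent)
```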

A study on specializing the University Museum in the Perspective of Culture, Arts, and Science (문화.예술.과학의 관점에서 대학박물관의 특성화를 위한 기초연구)

  • Choe, Jong-Ho
    • KOMUNHWA / no.68 / pp.25-39 / 2006
  • This article attempts to define the identity, role, and functions of a university museum and to suggest how it can be specialized from the perspective of culture, arts, and science. A university museum is defined as a center serving the university community and its development, which acquires, researches, communicates, exhibits, and educates with material evidence of people and their environment for purposes of eduinfotainment. The target users of today's university museum are not only professors, students, and university staff, but also the surrounding community: related professionals, patrons, parents, schoolchildren, and local residents. A multi-dimensional, multi-purpose university museum can be established and managed in the real world and/or a cyber world from the perspective of culture, arts, and science. Based on a ubiquitous system in the cyber world vis-à-vis the real world, the university museum can easily be used anywhere, at any time, and on any device. To specialize the university museum in the perspective of culture, arts, and science, it is desirable that the museum director, together with the chief executive of the university, promote specialization based on the philosophy and strategies of university management, after thoroughly evaluating the museum's components and resources: human resources, collections, and organizational, technological, capital, spatial, and symbolic resources. The specialization of the university museum should be planned and executed so as to maintain the typical scope of museum activities and ensure effective museum management. Specializing the university museum from the perspective of culture, arts, and science can contribute not only to establishing the identity of the university community and fulfilling the role and functions of the university museum, but also to encouraging academic development, raising the brand value of the university, and promoting its marketing.


A study on the connected-digit recognition using MLP-VQ and Weighted DHMM (MLP-VQ와 가중 DHMM을 이용한 연결 숫자음 인식에 관한 연구)

  • Chung, Kwang-Woo;Hong, Kwang-Seok
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.8 / pp.96-105 / 1998
  • The aim of this paper is to propose WDHMM (Weighted DHMM) using MLP-VQ to improve speaker-independent connected-digit recognition. The output distribution of an MLP neural network is a probability distribution that expresses, through the non-linear mapping between input patterns and learned patterns, the degree of similarity of the input to each class. This paper proposes MLP-VQ, which generates codewords from the index of the output node with the highest value in the MLP output distribution. Unlike conventional VQ, the distinguishing characteristic of MLP-VQ is that the degree of similarity between the current input pattern and each learned class pattern can be reflected in the recognition model. WDHMM is also proposed; it uses the MLP output distribution to weight the symbol generation probabilities of DHMMs. This newly suggested method shortens the time required for HMM parameter estimation and recognition because, unlike the conventional SCHMM, the symbol generation probability need not be modeled as a multi-dimensional normal distribution. It also improves recognition performance over the DHMM by 14.7%, with only a small increase in computation, because phone class relations can be reflected in the recognition model. Our results show that speaker-independent connected-digit recognition using MLP-VQ and WDHMM achieves 84.22%.
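
The two ideas above can be condensed into a few lines. In this hedged sketch, a random matrix stands in for a trained MLP layer and the DHMM emission matrix B is hypothetical: MLP-VQ takes the argmax output node as the codeword, while WDHMM weights every discrete emission probability by the full MLP output distribution:

```python
# Hedged sketch of MLP-VQ vs. WDHMM emission scoring. The MLP weights
# and emission matrix B are random stand-ins, not trained models.
import numpy as np

rng = np.random.default_rng(0)
n_codewords, n_states = 8, 3
B = rng.dirichlet(np.ones(n_codewords), size=n_states)  # b_j(k), rows sum to 1

def mlp_output_distribution(frame: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Softmax over output nodes: similarity of the frame to each class."""
    z = W @ frame
    e = np.exp(z - z.max())
    return e / e.sum()

W = rng.normal(size=(n_codewords, 12))   # stand-in for a trained MLP layer
frame = rng.normal(size=12)              # one feature frame
y = mlp_output_distribution(frame, W)

codeword = int(np.argmax(y))             # MLP-VQ: hard codeword index
b_dhmm = B[:, codeword]                  # plain DHMM emission per state
b_wdhmm = B @ y                          # WDHMM: emissions weighted by y
print(codeword, b_dhmm, b_wdhmm)
```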
