• Title/Summary/Keyword: advanced benchmark

Search Results: 154

Biaxial Buckling Analysis of Magneto-Electro-Elastic (MEE) Nano Plates using the Nonlocal Elastic Theory (비국소 탄성이론을 이용한 자기-전기-탄성 나노 판의 2방향 좌굴 해석)

  • Han, Sung-Cheon; Park, Weon-Tae
    • Journal of the Computational Structural Engineering Institute of Korea, v.30 no.5, pp.405-413, 2017
  • In this paper, we study the biaxial buckling of nonlocal magneto-electro-elastic (MEE) nanoplates based on the first-order shear deformation theory. The in-plane electric and magnetic fields can be neglected for MEE nanoplates. According to the magneto-electric boundary conditions and Maxwell's equations, the variation of the magnetic and electric potentials along the thickness direction of the MEE plate is determined. To reformulate the elastic theory of the MEE nanoplate, Eringen's nonlocal differential constitutive relations are used. The governing equations of the nonlocal theory are derived using the variational principle. The relations between the nonlocal and local theories are investigated through computational results, and the effects of the nonlocal parameter, in-plane load direction, and aspect ratio on the structural response are studied. The computational results also show the effects of the electric and magnetic potentials. These results can be useful in the design and analysis of advanced structures constructed from MEE materials and may serve as benchmark tests for future studies.
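The abstract above builds on Eringen's nonlocal differential constitutive relations. As a point of reference only, the standard form of that relation for a magneto-electro-elastic solid is sketched below in LaTeX notation, where $e_0 a$ is the nonlocal parameter, $C_{ijkl}$, $e_{kij}$, and $q_{kij}$ are the elastic, piezoelectric, and piezomagnetic constants, and $E_k$, $H_k$ are the electric and magnetic fields; this is the textbook form, not an equation reproduced from the paper itself.

    \left(1 - (e_0 a)^2 \nabla^2\right)\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl} - e_{kij}\,E_k - q_{kij}\,H_k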

Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation (효율적인 개방형 어휘 3차원 개체 분할을 위한 클래스-독립적인 3차원 마스크 제안과 2차원-3차원 시각적 특징 앙상블)

  • Sungho Song; Kyungmin Park; Incheol Kim
    • The Transactions of the Korea Information Processing Society, v.13 no.7, pp.335-347, 2024
  • Open-vocabulary 3D point cloud instance segmentation (OV-3DIS) is a challenging visual task that segments a 3D scene point cloud into object instances of both base and novel classes. In this paper, we propose a novel model, Open3DME, for OV-3DIS that addresses important design issues and overcomes limitations of existing approaches. First, to improve the quality of class-agnostic 3D masks, our model uses T3DIS, an advanced Transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, to obtain semantically text-aligned visual features for each point cloud segment, our model extracts both 2D and 3D features from the point cloud and the corresponding multi-view RGB images using pretrained CLIP and OpenSeg encoders, respectively. Last, to make effective use of both the 2D and 3D visual features of each point cloud segment during label assignment, our model adopts a unique feature ensemble method. To validate our model, we conducted both quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset, demonstrating significant performance gains.
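The label assignment step described above ensembles 2D and 3D visual features per mask proposal and matches them against text embeddings. The snippet below is a minimal, hypothetical sketch of such an ensemble-and-match step; the function name, the simple convex weighting alpha, and the array shapes are assumptions for illustration, not the actual Open3DME implementation.

    import numpy as np

    def assign_open_vocab_labels(feat2d, feat3d, text_emb, alpha=0.5):
        """Assign an open-vocabulary class to each class-agnostic 3D mask proposal.

        feat2d:   (num_masks, d) CLIP-aligned features pooled from multi-view images
        feat3d:   (num_masks, d) CLIP-aligned features pooled from the 3D point cloud
        text_emb: (num_classes, d) CLIP text embeddings of the class prompts
        alpha:    assumed 2D/3D ensemble weight (simple convex combination)
        """
        def l2norm(x):
            return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

        feat2d, feat3d, text_emb = l2norm(feat2d), l2norm(feat3d), l2norm(text_emb)

        # Ensemble the per-segment visual features, then renormalize.
        fused = l2norm(alpha * feat2d + (1.0 - alpha) * feat3d)

        # Cosine similarity between each fused segment feature and each class prompt.
        sim = fused @ text_emb.T                      # (num_masks, num_classes)
        return sim.argmax(axis=1), sim.max(axis=1)    # label and confidence per mask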

A practical analysis approach to the functional requirements standards for electronic records management system (기록관리시스템 기능요건 표준의 실무적 해석)

  • Yim, Jin-Hee
    • The Korean Journal of Archival Studies, no.18, pp.139-178, 2008
  • The functional requirements standards for electronic records management systems (ERMS) that have been published recently describe their specifications very precisely, covering not only the core records management functions but also system management functions and optional modules. The fact that these standards are similar to each other in the functions they describe reflects the global standardization trend in electronic records management practice. In addition, because these standards have been built collaboratively by archivists from many national archives, IT specialists, consultants, and records management application vendors, they are not only of high quality but are also well positioned to serve as certification criteria. Although there are many ways to benchmark the functional requirements standards developed from advanced electronic records management practice, this paper shows the possibility of, and meaningful business cases for, gaining useful practical ideas by imagining electronic records management practices related to these standards. The business cases explore central records management functions and the intellectual control of records, such as classification schemes and disposal schedules. The first example concerns the classification scheme: should the classification be fixed at the same number of levels, and should a record item be filed only at the last node of the classification scheme? The second example addresses a precise disposition schedule that can impose event-driven, chronological retention periods on records and that can be operated using an inheritance concept between parent and child nodes of the classification scheme. The third example shows the use of functions that hold (or freeze) and release records that must be kept as evidence for compliance requirements such as e-Discovery, or for organizational risk management, on the premise that records management should be the basis for legal compliance. The last case shows examples of the bulk and batch operations required if records managers are to use the ERMS as a practical tool. Records managers need to be able to understand and interpret the specifications of the functional requirements standards for ERMS from a practical point of view, and to review the standards and extract the specifications required for upgrading their own ERMS. The National Archives of Korea should provide the various stakeholders with a sound basis for implementing effective and efficient electronic records management practices by expanding the usage scope of the functional requirements standard for ERMS and building a common understanding of its implications.
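Two of the business cases above, the inheritance of retention periods through a classification scheme and the hold/freeze-and-release of records, can be illustrated with a small data-structure sketch. The classes and field names below are invented for illustration and do not come from the functional requirements standards or any particular ERMS.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ClassificationNode:
        """A node in a records classification scheme."""
        name: str
        retention_years: Optional[int] = None        # None means "inherit from parent"
        parent: Optional["ClassificationNode"] = None
        children: List["ClassificationNode"] = field(default_factory=list)

        def add_child(self, child: "ClassificationNode") -> "ClassificationNode":
            child.parent = self
            self.children.append(child)
            return child

        def effective_retention(self) -> Optional[int]:
            """Walk up the tree until an explicit retention period is found."""
            node = self
            while node is not None:
                if node.retention_years is not None:
                    return node.retention_years
                node = node.parent
            return None

    @dataclass
    class Record:
        title: str
        node: ClassificationNode
        on_hold: bool = False                        # legal hold / freeze flag

        def hold(self) -> None:
            self.on_hold = True                      # disposal is suspended while held

        def release(self) -> None:
            self.on_hold = False

        def disposable(self) -> bool:
            # A held record is never disposable, regardless of its retention period.
            return not self.on_hold

    # A child node without its own schedule inherits the parent's 10-year period.
    finance = ClassificationNode("Finance", retention_years=10)
    invoices = finance.add_child(ClassificationNode("Invoices"))
    assert invoices.effective_retention() == 10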

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung; Kim, Mintae; Kim, Wooju; Shin, Dongwook; Lee, Yong Hun
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.111-136, 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for information extraction and derive a confidence score. 3) Based on the predicate feature, extract information from the proper sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline, the proposed model shows a higher performance index. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query, with which we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take the heterogeneous characteristics of source-specific document types into account, and the proposed methodology proved to extract information effectively from various document types compared to the baseline model; previous research has the limitation that performance degrades when extracting information from document types that differ from the training data. In addition, this study prevents unnecessary extraction attempts on documents that do not contain the answer through the step that predicts, before extraction, whether a document or sentence is suitable for information extraction; it is meaningful that this provides a way to maintain precision even in a real web environment. Because information extraction for knowledge base expansion targets unstructured documents on the real web, there is no guarantee that a given document contains the correct answer, and previous machine reading comprehension studies show low precision in this setting because they frequently attempt to extract an answer even from documents with no correct answer. The policy of predicting document and sentence suitability for extraction is meaningful in that it helps maintain extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, data preprocessing: in this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, so extraction can be performed improperly when morphological analysis fails; improving the extraction results requires a more advanced morphological analyzer. Second, entity ambiguity: the information extraction system of this study cannot distinguish entities that share the same name but refer to different things. If several people with the same name appear in the news, the system may not extract information about the intended query; future research needs measures to disambiguate people with the same name. Third, evaluation query data: in this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and we built an evaluation data set using 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news) by judging whether each document includes a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, which is a costly activity that must be done manually; future research should evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction over queries against multi-source web documents in order to build an environment where results can be evaluated more objectively.
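The modeling contribution described above is a bi-directional LSTM-CRF sequence tagger that conditions on the query's predicate. The sketch below shows one plausible shape of such a model in PyTorch, assuming the pytorch-crf package for the CRF layer; the dimensions, the concatenation of a predicate embedding to every token embedding, and all names are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    from torchcrf import CRF   # pip install pytorch-crf (assumed CRF implementation)

    class PredicateBiLSTMCRF(nn.Module):
        """BiLSTM-CRF tagger that conditions every token on the query predicate."""

        def __init__(self, vocab_size, num_predicates, num_tags,
                     emb_dim=100, pred_dim=50, hidden_dim=256):
            super().__init__()
            self.tok_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.pred_emb = nn.Embedding(num_predicates, pred_dim)
            self.lstm = nn.LSTM(emb_dim + pred_dim, hidden_dim // 2,
                                batch_first=True, bidirectional=True)
            self.emissions = nn.Linear(hidden_dim, num_tags)
            self.crf = CRF(num_tags, batch_first=True)

        def _features(self, tokens, predicate):
            # Broadcast the predicate embedding to every token position.
            pred = self.pred_emb(predicate).unsqueeze(1).expand(-1, tokens.size(1), -1)
            x = torch.cat([self.tok_emb(tokens), pred], dim=-1)
            h, _ = self.lstm(x)
            return self.emissions(h)                  # (batch, seq_len, num_tags)

        def loss(self, tokens, predicate, tags, mask):
            # Negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(self._features(tokens, predicate), tags,
                             mask=mask, reduction='mean')

        def decode(self, tokens, predicate, mask):
            # Viterbi decoding of the best tag sequence per sentence.
            return self.crf.decode(self._features(tokens, predicate), mask=mask)

    # Usage sketch with toy shapes: two sentences of 12 tokens each.
    model = PredicateBiLSTMCRF(vocab_size=5000, num_predicates=200, num_tags=5)
    tokens = torch.randint(1, 5000, (2, 12))
    predicate = torch.randint(0, 200, (2,))
    tags = torch.randint(0, 5, (2, 12))
    mask = torch.ones(2, 12, dtype=torch.bool)
    nll = model.loss(tokens, predicate, tags, mask)
    paths = model.decode(tokens, predicate, mask)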