• Title/Summary/Keyword: data structuring


A New Focus Measure Method Based on Mathematical Morphology for 3D Shape Recovery (3차원 형상 복원을 위한 수학적 모폴로지 기반의 초점 측도 기법)

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.1
    • /
    • pp.23-28
    • /
    • 2017
  • Shape from focus (SFF) is a technique used to reconstruct the 3D shape of objects from a sequence of images obtained at different focus settings of the lens. In this paper, a new shape-from-focus method for 3D reconstruction of microscopic objects is described, based on the gradient operator in mathematical morphology. Conventionally, SFF methods use a single focus measure to assess focus quality. Due to the complex shape and texture of microscopic objects, operators based on a single measure are not sufficient, so we propose morphological operators with multiple structuring elements for computing the focus values. Finally, an optimal focus measure is obtained by combining the responses of all focus measures. The experimental results show that the proposed algorithm provides more accurate depth maps than existing methods in terms of three-dimensional shape recovery.
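
The pipeline the abstract describes (morphological gradient per structuring element, combination of the responses, depth from the best-focused frame) can be sketched as follows. This is not the paper's implementation; the particular structuring elements, the max-combination rule, and all function names are illustrative assumptions.

```python
import numpy as np

def morph_gradient(img, se_offsets):
    """Morphological gradient (dilation minus erosion) of a 2D image,
    with the structuring element given as a list of (dy, dx) offsets."""
    h, w = img.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in se_offsets)
    p = np.pad(img, pad, mode='edge')
    shifted = np.stack([p[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                        for dy, dx in se_offsets])
    return shifted.max(axis=0) - shifted.min(axis=0)

# Hypothetical multi-structuring-element set (cross, horizontal, vertical).
SES = {
    'cross': [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)],
    'horiz': [(0, -1), (0, 0), (0, 1)],
    'vert':  [(-1, 0), (0, 0), (1, 0)],
}

def focus_measure(img):
    """Combine the gradient responses of all structuring elements
    (here by pixel-wise maximum, an assumed combination rule)."""
    return np.maximum.reduce([morph_gradient(img, se) for se in SES.values()])

def depth_from_focus(image_stack):
    """Per pixel, pick the frame index with the highest focus value."""
    fm = np.stack([focus_measure(img) for img in image_stack])
    return fm.argmax(axis=0)
```

Given a stack of images taken at increasing focus settings, `depth_from_focus` returns an index map that serves as a (coarse) depth map.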

An Efficient Anchor Range Extracting Algorithm for The Unit Structuring of News Data (뉴스 정보의 단위 구조화를 위한 효율적인 앵커구간 추출 알고리즘)

  • 전승철;박성한
    • Journal of Broadcast Engineering
    • /
    • v.6 no.3
    • /
    • pp.260-269
    • /
    • 2001
  • This paper proposes an efficient algorithm for extracting the anchor ranges that exist in news video, for the unit structuring of news. To this end, the paper uses the anchor's face in the frame rather than the cuts where scene changes occur. Within an anchor range, we find the end position (frame) of the range with the FRFD (Face Region Frame Difference). Conversely, within a non-anchor range, we find the start position of the next anchor range by detecting the anchor's face. The face-detection process consists of two parts in order to reduce the computation time spent on MPEG decoding: the first part finds candidate anchor faces through a rough analysis of partially decoded MPEG, and the second part verifies the candidates with full decoding. The result of this process can be used as a basic step of news analysis. In particular, its fast processing and high recall rate make it suitable for real news services.
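
The FRFD idea (frame differencing restricted to the face region, with a threshold marking the end of the anchor shot) can be sketched as below. The bounding-box representation, threshold rule, and names are assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np

def frfd(prev, curr, box):
    """Face Region Frame Difference: mean absolute pixel difference,
    restricted to an (assumed) face bounding box (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    return np.abs(curr[y0:y1, x0:x1].astype(float)
                  - prev[y0:y1, x0:x1].astype(float)).mean()

def anchor_range_end(frames, box, threshold):
    """Return the index of the first frame whose FRFD against its
    predecessor exceeds the threshold, i.e. where the anchor shot ends."""
    for i in range(1, len(frames)):
        if frfd(frames[i - 1], frames[i], box) > threshold:
            return i
    return len(frames)
```

While the anchor sits still, the FRFD stays near zero; a jump above the threshold signals that the anchor shot has ended.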


An Architecture of Vector Processor Concept using Dimensional Counting Mechanism of Structured Data (구조성 데이터의 입체식 계수기법에 의한 벡터 처리개념의 설계)

  • Jo, Yeong-Il;Park, Jang-Chun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.1
    • /
    • pp.167-180
    • /
    • 1996
  • In a scalar-processing-oriented machine, scalar operations must be performed for vector processing as many times as the number of vector components; this is the vector-processing mechanism under the von Neumann operational principle. Accessing vector data has to be performed either through the pointing of each individual instruction or through address calculation in the ALU, because the only memory-accessing device for sequential counting is the program counter (PC). We propose that an access unit which addresses components dimensionally should be designed to compensate for this organizational hardware defect of the conventional concept. The necessary vector structuring is implemented in the instruction set and performed during data-memory access, overlapped externally with the data-processing unit at the same time.
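
The paper's dimensional access unit is a hardware proposal; purely as a software analogy, the counting mechanism can be sketched as nested per-dimension counters that enumerate component addresses from a base address, extents, and strides, without a per-element ALU address computation. All names and the address formula are illustrative assumptions.

```python
def vector_addresses(base, shape, strides):
    """Dimensional counter sketch: yield the memory address of every
    component of a structured (multi-dimensional) vector operand.
    `shape` gives the extent of each dimension, `strides` the byte
    step per dimension."""
    counters = [0] * len(shape)
    while True:
        yield base + sum(c * s for c, s in zip(counters, strides))
        # Odometer-style increment: bump the innermost counter and
        # carry into outer dimensions as each one wraps around.
        for d in reversed(range(len(shape))):
            counters[d] += 1
            if counters[d] < shape[d]:
                break
            counters[d] = 0
        else:
            return
```

For a 2x3 operand with row stride 12 and element stride 4 starting at address 100, the counter walks the six component addresses in order.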


A Study on Transactional Analysis and Job Satisfaction Using Pattern Analysis (패턴분석을 이용한 교류분석이론과 직무만족에 관한 연구)

  • Kim, Jong-Ho;Hyun, Mi-Sook;Hwang, Seung-Gook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.4
    • /
    • pp.526-533
    • /
    • 2007
  • In this paper, we study the patterns of job satisfaction of organizational members using four constructs of transactional analysis: egogram, life positions, strokes, and time structuring. The tool used for pattern analysis is the fuzzy TAM network, which is especially effective for this purpose. The inputs of the fuzzy TAM network are the values of the four transactional-analysis constructs; the output is one of two classes obtained by dividing the job-satisfaction scores into two groups. The correct classification rates for the training data and checking data were 85-100% and 60%, respectively.

Implementation and Performance Analysis of Event Processing and Buffer Managing Techniques for DDS (고성능 데이터 발간/구독 미들웨어의 이벤트, 버퍼 처리 기술 및 성능 분석)

  • Yoon, Gunjae;Choi, Hoon
    • Journal of KIISE
    • /
    • v.44 no.5
    • /
    • pp.449-459
    • /
    • 2017
  • Data Distribution Service (DDS) is a communication middleware that supports flexible, scalable, real-time communication. This paper describes several techniques to improve the performance of DDS middleware. Detailed events for the internal behavior of the middleware are defined, and a DDS message is disassembled into independent, meaningful submessages for event-driven structuring, in order to reduce processing complexity. A history-cache management technique is also proposed; it exploits the fact that status accesses and random accesses to the history cache occur frequently in DDS. These methods were implemented in EchoDDS, the DDS implementation developed by our team, and showed improved performance.
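
A minimal sketch of the history-cache idea follows: a bounded per-instance sample queue plus cached status counters, so that the frequent status queries never scan the stored samples. This is an assumed design for illustration, not EchoDDS's actual data structure or API.

```python
from collections import deque

class HistoryCache:
    """Sketch of a DDS-style history cache: one bounded deque per
    instance (KEEP_LAST semantics via maxlen) and a cached status
    counter updated on write, so status access is O(1)."""

    def __init__(self, depth):
        self.depth = depth
        self.samples = {}      # instance handle -> deque of samples
        self.total_added = 0   # cached status: maintained incrementally

    def add(self, instance, sample):
        q = self.samples.setdefault(instance, deque(maxlen=self.depth))
        q.append(sample)       # oldest sample evicted automatically
        self.total_added += 1

    def latest(self, instance):
        """Random access to the newest sample of an instance."""
        q = self.samples.get(instance)
        return q[-1] if q else None
```

The `maxlen` bound models a KEEP_LAST history depth: writes beyond the depth silently replace the oldest sample of that instance.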

A Study on the Health Perceptions and Health Behaviors in Nursing Students (간호대학생들의 건강지각과 건강행위에 관한 연구)

  • Lee Ock Suk;Suh In Sun
    • Journal of Korean Public Health Nursing
    • /
    • v.11 no.1
    • /
    • pp.39-50
    • /
    • 1997
  • This study was designed to identify the relationship between health perception and health behavior in nursing students and to provide basic data for structuring health-promotion strategies. The subjects were 191 nursing students in the nursing department of one national university in Chonju city. The data were collected from 10 to 25 November 1995 by means of a structured questionnaire. Health perception was measured by the health perception questionnaire developed by Ware and translated by You; health behavior was measured by the health promotion questionnaire developed by Cho. The data were analyzed by descriptive statistics, t-test, ANOVA and Pearson correlation using the $SPSS-PC^+$ program. The results of this study were as follows: 1. The mean health perception score of the subjects was 3.21; the level of health perception was relatively high. 2. The mean health behavior score of the subjects was 3.61; the level of health behavior was relatively high. 3. When health perception and health behavior were analyzed by Pearson correlation, it was found that the higher the degree of health perception, the better the reported health behavior (r=.1463, p=.022). 4. General characteristics related to health perception were attitude and school life (p<0.05); general characteristics related to health behavior were degree, religion, attitude and school life (p<0.05).
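
The correlation statistic used above (Pearson's r between two score lists) is straightforward to compute; a small self-contained sketch, unrelated to the study's actual data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two
    equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 indicates that higher perception scores go with higher behavior scores; the study's r=.1463 is a weak but significant positive correlation.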


Structuring of BOM and Routings for CIM System In Make to Order Environments -Application of CIM System for Ship Production- (수주생산 환경에서의 CIM 시스템을 위한 BOM과 라우팅의 구조화 -조선산업 사례 중심-)

  • Hwang, Sung-Ryong;Kim, Jae-Gyun
    • IE interfaces
    • /
    • v.15 no.1
    • /
    • pp.26-39
    • /
    • 2002
  • Two key data areas of the integrated production database in computer-integrated manufacturing (CIM) systems are the product structure, in the form of bills of material (BOM), and the process structure, in the form of routings. The great majority of existing information systems regard the BOM and routing as two separate data entities, possibly with some degree of cross-referencing. This paper proposes a new information structure, called the bills of material and routings (BMR), that logically integrates the BOM and routings for CIM systems in ship production. The characteristics of ship production are: 1) make-to-order production, 2) combined manufacturing principles (workshop production and construction-site production), 3) significant overlapping of design, planning and manufacturing, 4) very long order throughput times, and 5) complex product structures and production processes. The proposed BMR systematically manages all parts and operations data needed for ship production, taking these characteristics into account. Situated on the integrated production database, the BMR also supports the interface between engineering and production functions more efficiently, integrates a wide variety of production functions such as production planning, process planning, operation scheduling, material planning and costing, and simplifies the information flow between subsystems in CIM systems.
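
The integration of the two views can be sketched as a single node type that carries both child parts (the BOM view) and producing operations (the routing view). The class names, fields, and the part numbers are illustrative assumptions, not the paper's actual BMR schema.

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str
    workshop: str  # e.g. workshop vs. construction-site production

@dataclass
class BMRNode:
    """One node of a hypothetical BMR structure: each part holds its
    child parts (BOM) and the operations that produce it (routing),
    so planning functions can traverse both in one pass."""
    part_no: str
    children: list = field(default_factory=list)
    routing: list = field(default_factory=list)

    def all_operations(self):
        """Collect this part's operations and, recursively, those of
        all child parts - e.g. for operation scheduling or costing."""
        ops = list(self.routing)
        for child in self.children:
            ops.extend(child.all_operations())
        return ops
```

Because parts and operations live in one structure, a scheduler or costing function needs no cross-referencing between separate BOM and routing files.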

Implementation of Sensor Network Monitoring System with Energy Efficiency Constraints (에너지 효율 제약조건을 가진 센서 네트워크 모니터링 시스템 구현)

  • Lee, Ki-Wook;Seong, Chang-Gyu
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.1
    • /
    • pp.10-16
    • /
    • 2010
  • As the study of ubiquitous computing environments has become very active in recent years, sensor network technology is considered one of its core technologies. A wireless sensor network senses and gathers data of interest from its surroundings through sensor nodes deployed in physical space. Each sensor node constituting the sensor network must execute the required service using limited resources. This limitation requires sensor nodes to operate energy-efficiently when building a wireless sensor network, which extends the lifetime of the entire network. This study builds a system that can remotely monitor changing environment data in real time on a computer, while gathering and transmitting environment data of specific areas energy-efficiently.
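
One common way to meet such an energy constraint is to batch sensor readings and transmit them together, since radio transmission typically dominates a node's energy budget. The cost model and batching policy below are illustrative assumptions, not the system described in the paper.

```python
class SensorNode:
    """Sketch of an energy-aware sensor node (assumed cost model):
    sensing is cheap, transmission is expensive, so readings are
    buffered and sent as one packet per batch."""

    def __init__(self, batch_size, sense_cost=1, tx_cost=10):
        self.batch_size = batch_size
        self.sense_cost = sense_cost
        self.tx_cost = tx_cost
        self.buffer = []
        self.energy_used = 0
        self.transmissions = 0

    def sense(self, reading, radio):
        """Take one reading; flush the buffer to `radio` (a list
        standing in for the wireless link) when the batch is full."""
        self.energy_used += self.sense_cost
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            radio.append(list(self.buffer))
            self.buffer.clear()
            self.energy_used += self.tx_cost
            self.transmissions += 1
```

With a batch size of 5, ten readings cost two transmissions instead of ten, at the price of added reporting latency.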

Development of Fuzzy Inference Mechanism for Intelligent Data and Information Processing (지능적 정보처리를 위한 퍼지추론기관의 구축)

  • 송영배
    • Spatial Information Research
    • /
    • v.7 no.2
    • /
    • pp.191-207
    • /
    • 1999
  • Data and information necessary for solving spatial decision-making problems are imperfect or inaccurate, and most are described in natural language. In order to process such information by computer, the vague linguistic values need to be described quantitatively so that the computer can understand the natural language used by humans. For this, fuzzy set theory and fuzzy logic are the representative methodologies. This paper therefore describes the construction of a language model based on natural language that users can easily understand, as well as the logical concepts and construction process for building the fuzzy inference mechanism. This makes it possible to solve space-related decision-making problems intelligently through computer-based structuring and inference, even when the evaluation or decision-making problems are stated imprecisely and rest on inaccurate or indistinct data and information.
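
The quantification of vague linguistic values can be sketched with triangular membership functions and a simple max-min rule evaluation. The linguistic terms, their parameter values, and the two rules below are hypothetical examples, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function of a linguistic term
    (e.g. 'near', 'moderate', 'far') over a numeric domain."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(distance_km, slope_pct):
    """Hypothetical max-min inference with two rules:
    IF near AND flat THEN suitable; IF moderate AND flat THEN suitable."""
    near = tri(distance_km, -1, 0, 5)
    moderate = tri(distance_km, 3, 6, 10)
    flat = tri(slope_pct, -1, 0, 15)
    return max(min(near, flat), min(moderate, flat))
```

A linguistic statement like "the site is fairly near and flat" thus becomes a number in [0, 1] that a decision procedure can rank.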


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of analysis. Until recently, text-mining studies have focused on the second step. However, with the discovery that the text-structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve that quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly subjected to a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents, and as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using all the words included in the document.
This causes the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis techniques, but since this is not the core subject of the proposed method, we describe the process for documents that predefine keywords in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows: all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is affected by miscellaneous words as well as core words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's set of keyword vectors to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among subjects within each vector.
With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
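
Steps (3)-(5) of the abstract can be sketched as follows: given the already-embedded keyword vectors of one document, cluster them and emit one document vector per cluster. The abstract does not name a clustering algorithm, so the tiny k-means below, the centroid-per-cluster output, and all names are illustrative assumptions.

```python
import numpy as np

def multi_vector_embedding(keyword_vecs, k, iters=20, seed=0):
    """Cluster a document's keyword vectors with a minimal k-means and
    return one vector per cluster (its centroid), instead of a single
    doc2Vec-style vector for the whole document."""
    rng = np.random.default_rng(seed)
    X = np.asarray(keyword_vecs, dtype=float)
    # Initialize centers from k distinct keyword vectors.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each keyword vector to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned vectors.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers
```

A document whose keywords fall into two clearly separated topics then yields two vectors, one per topic, avoiding the interference a single averaged vector would suffer.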