• Title/Summary/Keyword: 레인지 데이터 (range data)

Search results: 566

The Revision of Motion Capture Data using Multiple Layers (다중 레이어를 이용한 모션캡쳐 수정에 관한 연구)

  • Kim, Ki-Hong;Choi, Chul-Young;Chae, Eel-Jin
    • Journal of Korea Multimedia Society / v.12 no.7 / pp.903-912 / 2009
  • There are still many difficulties in removing the flicker from motion capture data or in modifying motion capture data so that it fits the animation timing sheet. The existing method of modifying motion capture data has a problem: it takes almost as much time as key-frame animation done by a highly skilled animator, or even more. This kind of problem can be solved more effectively by creating a key animation data node together with blend layer and replacement layer nodes connected directly to it. This study presents a new method that makes it possible to modify animation data nonlinearly, without altering the existing animation data, by creating an animation layer node that connects directly to the animation node. The 'Maya' API is used to realize this method, and the scope of the research is limited to the 'Maya' 3D software widely used in motion pictures and animated films. According to the results of this study, the new method is much more intuitive than the existing nonlinear method and does not require the preliminary work of making animation clips. In addition, it makes it possible to remove flicker and to extract key frames, and, thanks to compatibility with other programs, motion capture data can be modified by creating a direct layer node. Finally, the study examines, compares, and analyzes the existing methods of modifying animation. (A minimal layer-based code sketch follows this entry.)

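The paper implements its own layer nodes through the Maya C++ API; as a rough, hedged illustration of the same layer-over-mocap idea, the sketch below uses Maya's scripted animLayer command instead. The joint name hip_jnt and the keyed values are hypothetical.

```python
# A minimal sketch (not the paper's custom API nodes): put a correction layer
# over imported motion-capture curves using Maya's built-in animLayer command.
import maya.cmds as cmds

def add_correction_layer(nodes, layer_name="mocap_fix", replace=False):
    """Create an animation layer over the mocap data and add nodes to it."""
    if not cmds.objExists(layer_name):
        cmds.animLayer(layer_name)                            # additive (blend) layer
    if replace:
        cmds.animLayer(layer_name, edit=True, override=True)  # replacement layer
    cmds.select(nodes, replace=True)
    cmds.animLayer(layer_name, edit=True, addSelectedObjects=True)
    return layer_name

# Usage: offset the hip at frame 30 without touching the baked mocap keys.
layer = add_correction_layer(["hip_jnt"], replace=False)
cmds.setKeyframe("hip_jnt", attribute="translateY", time=30,
                 value=2.0, animLayer=layer)
```

Keys set on the layer blend over (or, in override mode, replace) the original curves, so the mocap data itself is never edited, which is the nonlinear, non-destructive behaviour the paper aims for.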

CWM Based Metadata Repository Design and Implementation (CWM 기반의 메타데이터 레파지토리 설계 및 구현)

  • Baik, Woon-jib;Lim, Jung-eun
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.77-80 / 2004
  • As data warehouses have evolved, integrated metadata has become important not only for implementation but also for utilization, and it is now regarded as a strategic business asset. As a metadata standard, the OMG (Object Management Group) adopted the CWM (Common Warehouse Metamodel) as the standard for data warehousing and BI (Business Intelligence). However, because it has been implemented mainly for metadata interchange between software vendors, the use and adoption of CWM-based metadata has not spread. To improve this situation, we designed and implemented a CWM-based repository so that metadata created with CWM can be stored, maintained, and put to business use. This work also suggests that MDA (Model Driven Architecture) based design and implementation will become practical in the data warehouse field.


A Design for XMDR Search System Using the Meta-Topic Map (메타-토픽맵을 이용한 XMDR 검색 시스템 설계)

  • Heo, Uk;Hwang, Chi-Gon;Jung, Kye-Dong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.8 / pp.1637-1646 / 2009
  • Recently, many researchers have been studying various methods for data integration. Among these methods are an approach that uses a metadata repository and the Topic Map, which identifies the relationships between data. This study proposes a Meta-Topic Map that applies metadata and Topic Maps to build a topic map for a search keyword, and an XMDR as a way to connect the Meta-Topic Map with the metadata of the legacy systems. The Meta-Topic Map provides the Topic Map format and generates a topic map for the user's keyword, taking the semantic relationships of that keyword in the legacy systems into account. The XMDR performs structural integration by resolving the heterogeneity among the metadata of the legacy systems. The proposed system improves interoperability with the relational databases already built in the legacy systems, improves search efficiency, and is easy to extend. (A small keyword-expansion sketch follows below.)
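
As a rough, hypothetical illustration of the two roles described above, the sketch below expands a keyword through a tiny in-memory topic map and then maps the expanded terms onto each legacy system's schema; all topics, system names, and column names are made up.

```python
# Minimal sketch: topic-map keyword expansion + XMDR-style legacy mapping.
from collections import defaultdict

# Topic map: topic -> semantically associated topics (illustrative data only).
topic_map = {
    "customer": {"client", "buyer"},
    "order":    {"purchase", "sale"},
}

# XMDR-style mapping: standard term -> column name in each legacy system.
legacy_mappings = {
    "erp_db": {"customer": "CUST_NM", "order": "ORD_NO"},
    "crm_db": {"client": "client_name"},
}

def expand_keyword(keyword):
    """Return the keyword plus the topics associated with it."""
    return {keyword} | topic_map.get(keyword, set())

def resolve_legacy_columns(keyword):
    """Map every expanded topic onto the concrete columns of each legacy system."""
    hits = defaultdict(list)
    for topic in expand_keyword(keyword):
        for system, mapping in legacy_mappings.items():
            if topic in mapping:
                hits[system].append(mapping[topic])
    return dict(hits)

print(resolve_legacy_columns("customer"))
# e.g. {'erp_db': ['CUST_NM'], 'crm_db': ['client_name']}
```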

Development of a Healthcare Data ETL Tool Based on OMOP CDM (OMOP CDM 기반 의료 데이터 ETL 툴 개발)

  • Man-Uk Han;Pureum Lee;Ho-Woong Lee
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1224-1225 / 2023
  • With the growth of digital healthcare services, the volume of digital medical data is increasing rapidly every year, and various CDMs (Common Data Models) are being developed for the interchange and interoperation of medical data. However, as the demand for medical data exchange grows, converting data in existing legacy systems into a CDM inevitably incurs additional cost. This study therefore developed a medical-data ETL (Extract, Transform, Load) tool based on the OMOP CDM (Observational Medical Outcomes Partnership Common Data Model). By providing an effective interface for converting existing legacy database information into the CDM, the OMOP CDM ETL tool is expected to increase the efficiency of sharing, managing, and analyzing digital medical data. (A small ETL sketch follows below.)
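
The sketch below is not the paper's tool; it only illustrates the ETL direction with Python's sqlite3, mapping rows from a hypothetical legacy patients table onto a few columns of the OMOP CDM person table. The legacy table and column names are assumptions; 8507 and 8532 are the standard OMOP gender concept IDs.

```python
# Minimal ETL sketch: legacy "patients" rows -> OMOP CDM "person" rows.
import sqlite3

GENDER_CONCEPTS = {"M": 8507, "F": 8532}   # standard OMOP gender concept ids

def etl_person(legacy_conn: sqlite3.Connection, cdm_conn: sqlite3.Connection):
    # Extract: read the legacy patient records (hypothetical schema).
    rows = legacy_conn.execute(
        "SELECT patient_no, sex, birth_year FROM patients").fetchall()

    # Transform: convert local codes into CDM concept ids and column order.
    persons = [(pid, GENDER_CONCEPTS.get(sex, 0), year)
               for pid, sex, year in rows]

    # Load: only a few of the CDM person columns are shown here.
    cdm_conn.executemany(
        "INSERT INTO person (person_id, gender_concept_id, year_of_birth) "
        "VALUES (?, ?, ?)", persons)
    cdm_conn.commit()
```

A full conversion also involves mapping local vocabularies for diagnoses, drugs, and visits to standard concepts, which this sketch omits.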

Domain Searching method using DCT-coefficient for Fractal Image Compression (Fractal 압축방법을 위한 DCT 계수를 사용한 도메인 탐색 방법)

  • Suh, Ki-Bum;Chong, Jong-Wha
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.2 / pp.28-38 / 2000
  • This paper proposes a fractal compression method that uses domain classification and local searching based on the characteristics of DCT coefficients. In general, fractal image encoding involves a time-consuming search for the domain block that matches each range block. To reduce the computational complexity, the domain and range regions are each classified into four categories using the characteristics of their DCT coefficients, and each range region is encoded with a method suited to the properties of its category. Since the bit amount of the compressed image depends on the number of range blocks, the matching of domain and range blocks is steered toward large range blocks by means of a local search, so the compression ratio increases as the number of range blocks decreases. In the local search, the search complexity is reduced by determining the direction and distance of the search from the DCT coefficients. The experimental results show that the proposed algorithm achieves 1 dB higher PSNR and a 0.806 higher compression ratio than the previous algorithm. (A small classification sketch follows this entry.)

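The abstract does not give the paper's four categories or thresholds, so the sketch below only illustrates the general idea of labelling a block from its lowest-frequency DCT coefficients; the category names and the threshold are made up.

```python
# Minimal sketch: classify an image block by its first AC DCT coefficients,
# so that range blocks are matched only against domains of the same class.
import numpy as np
from scipy.fft import dctn

def classify_block(block, edge_threshold=10.0):
    """Label a block as smooth / vertical edge / horizontal edge / texture."""
    c = dctn(block.astype(float), norm="ortho")
    ac_x = abs(c[0, 1])   # horizontal frequency -> energy of vertical edges
    ac_y = abs(c[1, 0])   # vertical frequency   -> energy of horizontal edges
    if max(ac_x, ac_y) < edge_threshold:
        return "smooth"
    if ac_x >= 2 * ac_y:
        return "vertical_edge"
    if ac_y >= 2 * ac_x:
        return "horizontal_edge"
    return "diagonal_texture"

block = np.random.randint(0, 256, (8, 8))
print(classify_block(block))
```

Restricting the domain search to blocks of the same class is what cuts the matching cost; the same coefficients can also hint at the search direction, as the paper's local search does.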

A Design and Implementation of A Profile Reporting Viewer for Embedded Softwares (임베디드 소프트웨어를 위한 프로파일 레포팅 뷰어의 설계 및 구현)

  • Ko BangWon;Shin KyoungHo;Kim SangHeon;Yoo CheaWoo
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.583-585 / 2005
  • This paper designs and implements a reporting viewer with an intuitive GUI so that embedded software developers can easily and conveniently analyze test and profiling results and improve development efficiency. The proposed reporting viewer consists of a profile result data processor and a GUI report generator. The result data processor structures the low-level, character-string results produced by performance profiling of the embedded software into an XML document and provides them through an object-style API. The report generator uses the API objects produced by the result data processor to render various graphics-based profile report views. Because users can compose the profile report screens they want through the object-style API provided by the proposed reporting viewer, they can create more varied and intuitive report views than with existing software. Developers can therefore analyze performance and revise code faster and in more ways, which supports the development of efficient and reliable embedded software. (A small parsing sketch follows this entry.)

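As an illustration of the result-data-processor idea only, the sketch below structures flat profiler output into XML with Python's xml.etree.ElementTree; the "name calls seconds" line format is an assumption, not the paper's actual profiler format.

```python
# Minimal sketch: turn flat profiler text into an XML tree a report view can use.
import xml.etree.ElementTree as ET

def profile_text_to_xml(raw_text):
    root = ET.Element("profile")
    for line in raw_text.strip().splitlines():
        name, calls, seconds = line.split()
        ET.SubElement(root, "function", name=name, calls=calls, seconds=seconds)
    return ET.ElementTree(root)

raw = """main 1 0.512
draw_frame 240 0.138
read_sensor 960 0.044"""

tree = profile_text_to_xml(raw)
# A report view can now query structured data instead of re-parsing strings:
slowest = max(tree.getroot(), key=lambda f: float(f.get("seconds")))
print(slowest.get("name"), slowest.get("seconds"))   # main 0.512
```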

A Design of Collaborative Agent for Data Grid MiddleWare using XMDR (XMDR을 이용한 데이터 그리드 미들웨어의 협력 에이전트 설계)

  • Noh, Seon-Taek;Moon, S.J.;Eum, Y.H.;Kook, Y.G.;Jung, G.D.;Choi, Y.G.
    • Proceedings of the Korean Information Science Society Conference / 2006.10a / pp.557-562 / 2006
  • In today's enterprise environment, as the need to integrate distributed information and share it grows, interoperation for collaboration among existing legacy systems is increasingly emphasized. Interconnecting independent legacy systems requires overcoming platform heterogeneity, semantic heterogeneity, and so on. To solve these problems, we designed a middleware using XMDR, which is being developed under ISO/IEC 11179. Applying the middleware to legacy systems made it possible to keep data sharing and integration consistent. However, because the middleware has no function for coordinating the resource and job status of each node, the efficiency of information use cannot be guaranteed, so a way to manage and coordinate the legacy systems is needed. This paper designs a collaborative agent that coordinates accurate information exchange between the request agents that ask for information and the information agents that provide it, and that handles information monitoring of each legacy system, job distribution, and resource management of the local nodes, so that the integrated information can be used efficiently. (A small coordination sketch follows this entry.)

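As a rough, hypothetical illustration of the coordination role only (node names and the load metric are made up), the sketch below has the collaborative agent route a request to the information agent on the least loaded node.

```python
# Minimal sketch: route a request agent's query to the least loaded node.
class CollaborativeAgent:
    def __init__(self):
        self.node_load = {}                      # node name -> load in [0, 1]

    def report_status(self, node, load):
        """Information agents periodically report their node's resource load."""
        self.node_load[node] = load

    def dispatch(self, query):
        """Choose the least loaded node; a real system would forward the query."""
        if not self.node_load:
            raise RuntimeError("no information agents registered")
        node = min(self.node_load, key=self.node_load.get)
        return node, query

broker = CollaborativeAgent()
broker.report_status("erp_node", 0.72)
broker.report_status("crm_node", 0.31)
print(broker.dispatch("customer address"))       # ('crm_node', 'customer address')
```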

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s. At that time neural networks were not widely used in either industry or academia because of the shortage of large-scale datasets and the low computational power available. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived people's interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these same factors. For most domains it is difficult and laborious to gather a large-scale dataset for training a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on the new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of an image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of these three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, Principal Component Analysis (PCA) is used to select salient features before the training phase. With salient features the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare the multiple-ConvNet-layer representation against single-layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for the multiple-layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. It also achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively. (A small pipeline sketch follows below.)
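
A minimal sketch of the pipeline under stated assumptions: torchvision's pre-trained AlexNet (torchvision >= 0.13) stands in for the paper's network, the activations of its three fully connected layers are concatenated to 4096+4096+1000 = 9192 dimensions, and scikit-learn's PCA plus a logistic-regression classifier stand in for the feature selection and the final classifier; the number of principal components is an arbitrary choice here.

```python
# Minimal sketch: multi-layer AlexNet features + PCA for transfer learning.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

model = models.alexnet(weights="IMAGENET1K_V1").eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def multi_layer_features(img):
    """Concatenate the three fully connected layer activations (9192-d)."""
    x = preprocess(img).unsqueeze(0)
    x = torch.flatten(model.avgpool(model.features(x)), 1)
    feats = []
    for i, layer in enumerate(model.classifier):
        x = layer(x)
        if i in (2, 5, 6):            # after fc6+ReLU, fc7+ReLU, and fc8
            feats.append(x)
    return torch.cat(feats, dim=1).squeeze(0).numpy()

# Assumed to exist: train_images (PIL images) and train_labels from the target task.
# X = np.stack([multi_layer_features(img) for img in train_images])
# X_red = PCA(n_components=512).fit_transform(X)      # drop redundant dimensions
# clf = LogisticRegression(max_iter=1000).fit(X_red, train_labels)
```

Whether the fc6/fc7 activations are taken before or after the ReLU, and which classifier is trained on the reduced features, are choices the abstract does not pin down; the sketch takes the post-ReLU outputs and a linear classifier.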

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence / v.9 no.1 / pp.65-70 / 2023
  • In this paper, we propose an autonomous driving system that uses an end-to-end model to reduce the lane departures and traffic-light misrecognition seen in vision-sensor-based systems. End-to-end learning can be extended to a variety of environmental conditions. Driving data are collected with a vision-sensor-based model car. From the collected data, two datasets are prepared: the original data and the data with out-layers (outliers) removed. With camera images as the input and speed and steering as the outputs, an end-to-end model is trained on each dataset, and the reliability of the trained models is verified. The trained end-to-end model is applied to the model car to predict steering angles from image data. The model car's driving results show that the model trained on the data with out-layers removed performs better than the model trained on the original data. (A small data-cleaning sketch follows below.)
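
The abstract does not state the exact out-layer criterion, so the sketch below uses a z-score filter on the steering labels as an illustrative stand-in; the file names, angles, and threshold are made up (a low threshold is used because the toy sample is tiny).

```python
# Minimal sketch: drop driving samples whose steering label is an outlier
# before training the end-to-end model.
import numpy as np

def remove_outlayers(images, steering, z_max=1.5):
    """Keep only samples whose steering angle lies within z_max std devs."""
    steering = np.asarray(steering, dtype=float)
    z = np.abs(steering - steering.mean()) / (steering.std() + 1e-8)
    keep = z < z_max
    return [img for img, k in zip(images, keep) if k], steering[keep]

angles = [0.02, -0.05, 0.01, 9.7, 0.00]          # 9.7 is an obvious out-layer
frames = [f"frame_{i}.png" for i in range(len(angles))]
clean_frames, clean_angles = remove_outlayers(frames, angles)
print(len(clean_frames), clean_angles)           # the 9.7 sample is dropped
```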

The Design and Implementation of Adaptive Broker using Multi-Layer Data (멀티-레이어 데이터를 이용한 적응적 브로커 설계 및 구현)

  • 김은영;박중길;권택근
    • Proceedings of the Korea Multimedia Society Conference / 2002.05c / pp.377-380 / 2002
  • We develop a broker with network flow control so that an Internet-based VOD (Video On Demand) service system can guarantee QoS (Quality of Service) to its users. The broker performs flow control according to the network condition and provides the VOD service using multi-layer data. As a result, users need only a minimal buffer and the complexity of buffer-exchange algorithms is reduced. This paper designs a broker that monitors the network state to provide the VOD service and describes the implementation results. (A small layer-selection sketch follows this entry.)

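As a rough, hypothetical illustration of the broker's flow-control decision (layer bitrates and bandwidth values are made up), the sketch below chooses how many layers of the multi-layer stream fit within the measured bandwidth.

```python
# Minimal sketch: pick the base layer plus as many enhancement layers as fit.
LAYER_BITRATES_KBPS = [300, 400, 600]    # base layer + two enhancement layers

def select_layers(available_kbps):
    """Return how many layers (at least the base layer) fit in the bandwidth."""
    total, count = 0, 0
    for rate in LAYER_BITRATES_KBPS:
        if total + rate > available_kbps:
            break
        total += rate
        count += 1
    return max(count, 1)                 # always stream at least the base layer

for bandwidth in (250, 800, 2000):
    print(bandwidth, "kbps ->", select_layers(bandwidth), "layer(s)")
```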