• Title/Summary/Keyword: Computer-Aided Design (컴퓨터 지원설계)


Real-Time Stereoscopic Visualization of Very Large Volume Data on CAVE (CAVE상에서의 방대한 볼륨 데이타의 실시간 입체 영상 가시화)

  • 임무진;이중연;조민수;이상산;임인성
    • Journal of KIISE: Computing Practices and Letters / v.8 no.6 / pp.679-691 / 2002
  • Volume visualization is an important subarea of scientific visualization, concerned with techniques for generating meaningful visual information from abstract and complex volume datasets defined in three- or higher-dimensional space. It has become increasingly important in various fields, including meteorology, medical science, and computational fluid dynamics. Virtual reality, on the other hand, is a research field focusing on techniques that help users experience virtual worlds through visual, auditory, and tactile senses. In this paper, we develop a visualization system for CAVE, an immersive 3D virtual environment system, that generates stereoscopic images from huge human volume datasets in real time using an improved volume visualization technique. To complement 3D texture-mapping based volume rendering methods, which easily slow down as data sizes increase, our system utilizes an image-based rendering technique to guarantee real-time performance. The system has been designed to offer a variety of user interface functions for effective visualization. In this article, we present a detailed description of our real-time stereoscopic visualization system and show how the Visible Korean Human dataset is effectively visualized on CAVE.

Design of TMO Model based Dynamic Analysis Framework: Components and Metrics (TMO모델 기반의 동적 분석 프레임워크 설계 : 구성요소 및 측정지수)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • Journal of KIISE: Computer Systems and Theory / v.32 no.7 / pp.377-392 / 2005
  • Since the advent of computer systems, many studies have been conducted to measure and analyze system performance in areas such as system modeling, performance measurement, monitoring, and performance prediction. However, frameworks that unify these performance-related areas have rarely been studied. In the case of TMO (Time-Triggered Message-Triggered Object), a real-time programming model, hardly any performance tools or frameworks are provided beyond a simple run-time monitor, so it is difficult to analyze the performance of TMO-based real-time systems and processes. In this paper, we therefore propose TDAF (TMO-based Dynamic Analysis Framework), a framework for the dynamic analysis of TMO-based real-time systems. TDAF handles the entire process of performance measurement and analysis, and provides developers with more reliable information by systematically combining a load model, a performance model, and a reporting model. To support this framework, we propose a load model that extends the conventional one by applying the TMO model, together with a load calculation algorithm that computes the load of TMO objects. Based on the TMO model, we also propose performance algorithms that implement the conceptual performance metrics, and we present a reporting model and algorithms that derive the period and deadline of real-time processes from the load and performance values. Finally, we perform experiments to validate the reliability of the load calculation algorithm and report the experimental results.
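The abstract does not reproduce the paper's load calculation algorithm. As a rough illustration only, the load of a set of periodic real-time tasks is often computed with the classical utilization model U = Σ Cᵢ/Tᵢ; the sketch below assumes that model and hypothetical task parameters, not the paper's extended TMO load model.

```python
# Hypothetical sketch of a periodic-task load calculation, assuming the
# classical utilization model U = sum(C_i / T_i). The task list below is
# made up for illustration; the paper's actual TMO load model is richer.

def load(tasks):
    """tasks: list of (worst-case execution time, period) pairs."""
    return sum(c / t for c, t in tasks)

# Three hypothetical TMO object methods: (execution time, period) in ms
tmo_tasks = [(2.0, 10.0), (5.0, 40.0), (10.0, 100.0)]
u = load(tmo_tasks)
print(f"system load = {u:.3f}")  # 0.2 + 0.125 + 0.1 = 0.425
```

A load value above 1.0 under this model would indicate that the task set cannot be scheduled, which is the kind of information a reporting model can use when deriving periods and deadlines.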

Design and Implementation of a Physical Network Separation System using Virtual Desktop Service based on I/O Virtualization (입출력 가상화 기반 가상 데스크탑 서비스를 이용한 물리적 네트워크 망분리 시스템 설계 및 구현)

  • Kim, Sunwook;Kim, Seongwoon;Kim, Hakyoung;Chung, Seongkwon;Lee, Sookyoung
    • KIISE Transactions on Computing Practices / v.21 no.7 / pp.506-511 / 2015
  • IOV (I/O Virtualization) is a technology that allows one or more virtual desktops to share a single physical device. In general, a virtual desktop uses virtual I/O devices provided by the virtualization software through SW emulation. Virtual desktops that rely on SW-emulated I/O devices suffer from degraded service quality and performance, and cannot support high-end applications such as 3D-based CAD and games. In this paper, we propose a physical network separation system that uses a virtual desktop service based on HW direct assignment to overcome these problems. The proposed system uses server virtualization technology to provide, within a single physical desktop computer, independent desktops for accessing the intranet and the internet. In addition, the system supports network separation without the performance degradation caused by inspecting network packets for logical network separation, and without the additional desktop installations required for physical network separation.

ROUTE/DASH-SRD based Point Cloud Content Region Division Transfer and Density Scalability Supporting Method (포인트 클라우드 콘텐츠의 밀도 스케일러빌리티를 지원하는 ROUTE/DASH-SRD 기반 영역 분할 전송 방법)

  • Kim, Doohwan;Park, Seonghwan;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.24 no.5 / pp.849-858 / 2019
  • Recent developments in computer graphics and image processing technology have increased interest in point cloud technology, which captures real-world space and object information as three-dimensional data. In particular, point cloud technology can provide spatial information accurately and has attracted a great deal of interest in the fields of autonomous vehicles and AR (Augmented Reality)/VR (Virtual Reality). However, providing users with 3D point cloud content, which requires more data than conventional 2D images, calls for various technological developments. To address these problems, the international standardization organization MPEG (Moving Picture Experts Group) is discussing efficient compression and transmission schemes. In this paper, we propose a region division transfer method for 3D point cloud content by extending the existing MPEG-DASH (Dynamic Adaptive Streaming over HTTP) SRD (Spatial Relationship Description) technology. Quality parameters are additionally defined in the signaling message so that quality can be selectively determined according to the user's request. We also design a verification platform for a ROUTE (Real Time Object Delivery Over Unidirectional Transport)/DASH based heterogeneous network environment and use it to validate the proposed technology.
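To make the SRD extension concrete, the sketch below builds a standard MPEG-DASH SRD descriptor string (scheme `urn:mpeg:dash:srd:2014`, whose value carries `source_id, object_x, object_y, object_w, object_h, total_w, total_h`) and appends a hypothetical trailing density field, mirroring how the paper extends the signaling message with quality parameters. The extra field and its position are an assumption for illustration, not the paper's actual syntax.

```python
# Illustrative sketch, not the authors' implementation: an SRD descriptor
# for one spatial region (tile) of a scene, with a hypothetical "density"
# value appended as a quality parameter.

def srd_descriptor(source_id, x, y, w, h, total_w, total_h, density=None):
    value = f"{source_id},{x},{y},{w},{h},{total_w},{total_h}"
    if density is not None:
        value += f",{density}"  # hypothetical density-scalability extension
    return (f'<SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" '
            f'value="{value}"/>')

# One region covering the top-left quarter of a 1920x1080 reference space,
# signaled at a (hypothetical) 50% point density
print(srd_descriptor(0, 0, 0, 960, 540, 1920, 1080, density=50))
```

A client could then request only the regions in its viewport, and pick a lower density value for regions far from the viewer.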

The Trends and Prospects of ICT based Education (ICT를 활용한 교육의 동향과 전망)

  • Woo, Hyun-Jeong;Jo, Hye-Jeong;Choi, Yool
    • Informatization Policy / v.25 no.4 / pp.3-36 / 2018
  • This article discusses the possibilities and limitations of ICT education by reviewing previous research on its various aspects, including educational goals, contents, methods, and evaluation. First, with regard to its goals, prior studies suggest that ICT education aims to nurture digital citizenship among students and to enable them to participate in different sectors of society; it characterizes the core capacities of future learners as 'lifelong learners,' 'information producers/consumers,' and 'local/global citizens.' Second, regarding educational content, researchers have focused on SW (software) education, developing educational programs and examining their effectiveness. However, to keep the educational contents relevant to the future society, institutional support is imperative, including strengthening educators' capacities and aligning ICT education with subject education. Third, as for educational methods, various ICTs such as flipped learning and augmented reality (AR) are being applied to actual classroom teaching. Research on educational methods, the most vibrant area of ICT education scholarship, is expected to improve previous methods and lead the qualitative development of ICT education. Fourth, the previous discussion on educational evaluation focuses on computer-based evaluation. Evaluation using ICT will enable educators to assess the characteristics and achievement of individual learners accurately and to apply the teaching-learning process effectively, ultimately enhancing the effectiveness of educational evaluation. Along with this overall review of the possibilities of ICT education, the article discusses the limitations of current ICT education and its implications for educational inequalities.

Design and Evaluation of an Efficient Flushing Scheme for key-value Store (키-값 저장소를 위한 효율적인 로그 처리 기법 설계 및 평가)

  • Han, Hyuck
    • The Journal of the Korea Contents Association / v.19 no.5 / pp.187-193 / 2019
  • Key-value storage engines are an essential component in many computing environments with growing demand, including social networks, online e-commerce, and cloud services. Recent key-value storage engines offer many features such as transactions, versioning, and replication. In a key-value storage engine, transaction processing provides atomicity through Write-Ahead Logging (WAL), and the synchronous commit method flushes log data before a transaction completes. According to our observations, flushing log data to persistent storage is a performance bottleneck for key-value storage engines due to the significant overhead of fsync() calls, despite the various optimizations in existing systems. In this article, we propose a group synchronization method to improve the performance of the key-value storage engine. We also design and implement a transaction scheduling method that executes other transactions while the system processes fsync() calls. The proposed method efficiently reduces the number of fsync() calls in synchronous commit while supporting the same transaction guarantees as the existing system. We implement our scheme on the WiredTiger storage engine, and our experimental results show that the proposed system improves the performance of key-value workloads over existing systems.
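The core idea of group synchronization can be sketched in a few lines: instead of issuing one fsync() per committed transaction, commits are batched and a single fsync() covers the whole group. This is a minimal single-threaded sketch under that assumption; the class and field names are illustrative and not WiredTiger's API, and the paper's scheduling of other transactions during fsync() is not modeled.

```python
# Minimal sketch of group commit for a write-ahead log: batch log records
# and issue one fsync() per group instead of one per transaction.
import os
import tempfile

class GroupCommitLog:
    def __init__(self, path, group_size=8):
        self.f = open(path, "ab")
        self.group_size = group_size
        self.pending = 0          # records written but not yet fsync'ed
        self.fsync_calls = 0      # counter to show the reduction

    def commit(self, record: bytes):
        self.f.write(record + b"\n")      # append WAL record
        self.pending += 1
        if self.pending >= self.group_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.f.flush()
            os.fsync(self.f.fileno())     # one fsync covers the whole group
            self.fsync_calls += 1
            self.pending = 0

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
log = GroupCommitLog(tmp.name, group_size=8)
for i in range(32):
    log.commit(f"txn-{i}".encode())
log.flush()
print(log.fsync_calls)  # 4 fsync() calls instead of 32
```

The durability trade-off is that a transaction is not durable until its group is flushed, which is why real systems bound the batching window as well as the batch size.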

A Study on the Methods of Building Tools and Equipment for Digital Forensics Laboratory (디지털증거분석실의 도구·장비 구축 방안에 관한 연구)

  • Su-Min Shin;Hyeon-Min Park;Gi-Bum Kim
    • Convergence Security Journal / v.22 no.5 / pp.21-35 / 2022
  • The use of digital information is continuously increasing and diversifying with the development of information and communication technology and the 4th industrial revolution, and crimes exploiting digital information are increasing in proportion. However, there are few established cases in Korea of building an environment for processing and analyzing digital evidence. The budget allocated to each organization differs, and a digital forensics laboratory built without solving the chronic problem of securing space has no standard to reference from the initial configuration stage. Based on this awareness of the problem, this thesis conducted an exploratory study focusing on the tools and equipment necessary for building a digital forensics laboratory. As the research method, focus group interviews were conducted with 15 experts with extensive practical experience in digital forensics laboratories or the digital forensics field, and expert opinions were collected on the following nine areas: network configuration, analyst computers, personal tools/equipment, imaging devices, dedicated software, open source software, common tools/equipment, accessories, and other considerations. As a result, a list of tools and equipment for digital forensics laboratories was derived.

The Design of Mobile Medical Image Communication System based on CDMA 1X-EVDO for Emergency Care (CDMA2000 1X-EVDO망을 이용한 이동형 응급 의료영상 전송시스템의 설계)

  • Kang, Won-Suk;Yong, Kun-Ho;Jang, Bong-Mun;Namkoong, Wook;Jung, Hai-Jo;Yoo, Sun-Kook;Kim, Hee-Joung
    • Proceedings of the Korean Society of Medical Physics Conference / 2004.11a / pp.53-55 / 2004
  • In emergency cases such as severe trauma involving fractures of the skull, spine, or cervical bone from an auto accident or a fall, or pneumothorax, which cannot be diagnosed exactly by eye examination, radiological examination is necessary while the patient is being transferred to the hospital for emergency care. The aim of this study was to design and evaluate a prototype mobile medical image communication system based on CDMA 1X-EVDO. The system consists of a laptop computer used as a transmitting DICOM client, linked to a cellular phone supporting the CDMA 1X-EVDO communication service, and a receiving DICOM server installed in the hospital. DR images were stored in DICOM format in the storage of the transmitting client, compressed into JPEG2000 format, and transmitted from the client to the receiving server. All images were progressively transmitted to the receiving server and displayed on the server monitor. To evaluate image quality, the PSNR of the compressed images was measured. Several field tests were also performed using a commercial CDMA2000 1X-EVDO reverse link with TCP/IP data segments, at several vehicle velocities in the Seoul area.
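PSNR, the quality measure used above, is standard: PSNR = 10·log10(MAX²/MSE), where MAX is the peak pixel value and MSE the mean squared error between the original and compressed images. The sketch below assumes 8-bit pixels and uses made-up pixel values for illustration; it is not the authors' evaluation code.

```python
# Sketch of PSNR for 8-bit images: PSNR = 10 * log10(255^2 / MSE).
import math

def psnr(original, compressed, max_val=255):
    """original, compressed: equal-length sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical 4-pixel example
orig = [52, 60, 200, 128]
comp = [50, 61, 198, 130]
print(f"PSNR = {psnr(orig, comp):.2f} dB")
```

Higher PSNR means less compression distortion; values above roughly 40 dB are commonly treated as visually near-lossless, which matters for diagnostic images sent over a bandwidth-limited cellular link.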


Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, a separate log data processing system is needed to gather, store, categorize, and analyze the log data generated while processing clients' business. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas cannot expand nodes when the stored data must be distributed across nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
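The log collector's classify-and-distribute step can be sketched as a simple dispatch: records flagged for real-time analysis go to the MySQL module, everything else to the MongoDB module. The rule and the field names (`realtime`, `type`) are assumptions for illustration; the paper does not specify the classification criteria, and the actual modules would be backed by real database connections rather than Python lists.

```python
# Schematic sketch of the log collector module's dispatch step.
# Records needing real-time analysis -> MySQL module; the rest of the
# unstructured records -> MongoDB module (Auto-Sharding bulk path).

def dispatch(records):
    to_mysql, to_mongodb = [], []
    for rec in records:
        if rec.get("realtime"):
            to_mysql.append(rec)      # real-time analysis path
        else:
            to_mongodb.append(rec)    # unstructured bulk-storage path
    return to_mysql, to_mongodb

# Hypothetical bank log records
logs = [
    {"type": "transfer", "realtime": True,  "msg": "wire out"},
    {"type": "login",    "realtime": False, "msg": "client session opened"},
    {"type": "batch",    "realtime": False, "msg": "nightly settlement"},
]
mysql_q, mongo_q = dispatch(logs)
print(len(mysql_q), len(mongo_q))  # 1 2
```

In the full system, the MongoDB queue would be written in bulk (where the chunk size tuned in the evaluation matters), while the MySQL queue feeds the log graph generator directly.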