• Title/Summary/Keyword: design of algorithms

Search Results: 2,722

A Study on the Implementation of Coexistent Reality Technology for Ship Outfitting Inspection (선박 의장 검사를 위한 공존현실 기술 적용에 관한 연구)

  • Ha, Yeon-Chul;Kim, Jin-Woo;Kim, Goo;Shin, Hyun-Shil
    • Journal of the Institute of Convergence Signal Processing / v.21 no.1 / pp.13-20 / 2020
  • In shipyards, internal outfitting is carried out after each ship block has been designed and manufactured. Internal outfitting refers to the installation of parts and equipment other than the hull itself. In this process, if the pipes and equipment within a block are not assembled correctly before the blocks are joined, substantial costs are incurred. Moreover, even after internal outfitting is completed, problems such as the spatial arrangement of the installed components can force rework and reduce the production efficiency of the ship. To address these problems, this study introduces a space-arrangement and inspection system, based on coexistent reality technology and 3D design drawings, for use before and after hull outfitting work. The coexistent reality algorithms and inspection systems developed in this study build on an AR service that has not previously been introduced in Korea, so they should be widely applicable not only to shipbuilding but also to other manufacturing industries that work from design drawings, such as the automotive and construction industries.
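
A minimal sketch (in Python) of the deviation check that underlies such an inspection: measured outfitting positions are compared against the 3D design positions and parts outside a tolerance are flagged. The AR overlay and registration pipeline of the actual system is not reproduced, and the tolerance value is an assumption.

```python
import numpy as np

def flag_misplaced(design_xyz, measured_xyz, tol_mm=5.0):
    """Flag parts whose measured position deviates from the 3D design
    position by more than tol_mm. Toy stand-in; the real system works
    on registered AR scenes, not pre-matched point lists."""
    deviation = np.linalg.norm(np.asarray(design_xyz) - np.asarray(measured_xyz), axis=1)
    return deviation > tol_mm

design = [[0, 0, 0], [100, 0, 0]]
measured = [[1, 1, 0], [100, 12, 0]]
print(flag_misplaced(design, measured))  # -> [False  True]
```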

An Efficient Subsequence Matching Method Based on Index Interpolation (인덱스 보간법에 기반한 효율적인 서브시퀀스 매칭 기법)

  • Loh Woong-Kee;Kim Sang-Wook
    • The KIPS Transactions: Part D / v.12D no.3 s.99 / pp.345-354 / 2005
  • Subsequence matching is one of the most important operations in the field of data mining. Existing subsequence matching algorithms use only one index, and their performance degrades as the difference grows between the length of a query sequence and the size of the windows (subsequences of a fixed length extracted from data sequences to construct the index). In this paper, we propose a new subsequence matching method based on index interpolation to overcome this problem. An index interpolation method constructs two or more indexes and performs searching by selecting the most appropriate index among them according to the length of the given query sequence. We first examine, through preliminary experiments, how performance varies with the difference between the query sequence length and the window size, and formulate a search cost model that reflects the distribution of query sequence lengths from the viewpoint of physical database design. Next, we propose a new subsequence matching method based on index interpolation to improve search performance. We also present an algorithm, based on the search cost formula above, for constructing optimal indexes that yield better search performance. Finally, we verify the superiority of the proposed method through a series of experiments on real and synthesized data sets.
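
A minimal sketch of the index-selection step in index interpolation: among several pre-built indexes, pick the one whose window size is largest while still fitting inside the query, since the gap between query length and window size is what drives search cost. The `WindowIndex` name is hypothetical; the paper's R*-tree machinery and cost model are not reproduced.

```python
class WindowIndex:
    def __init__(self, window_size):
        self.window_size = window_size  # length of windows this index covers

def select_index(indexes, query_len):
    # Performance degrades as (query_len - window_size) grows, so pick
    # the index with the largest window that still fits in the query.
    fitting = [ix for ix in indexes if ix.window_size <= query_len]
    return max(fitting, key=lambda ix: ix.window_size)

indexes = [WindowIndex(w) for w in (64, 128, 256)]
print(select_index(indexes, query_len=200).window_size)  # -> 128
```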

Design of Digital PLL with Asymmetry Compensator in High Speed DVD Systems (고속 DVD 시스템에서 비대칭 신호 보정기와 결합한 Digital PLL 설계)

  • 김판수;고석준;최형진;이정현
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.12A / pp.2000-2011 / 2001
  • In this paper, we convert conventional low-speed (1x, 6x) DVD systems designed around an analog PLL (Phase-Locked Loop) into a digital PLL so that they can operate flexibly at high speed, and we present an optimal DPLL model for high-speed (20x) DVD systems. In particular, we focus on the design of a DPLL that can overcome channel effects such as bulk delay, sampling clock frequency offset, and asymmetry in high-speed DVD systems. First, a modified Early-Late timing error detector is proposed as the digital timing recovery scheme. In addition, a four-sample compensation algorithm using zero-crossing points is designed as the asymmetry compensator, to achieve high-speed operation and strong reliability. We show that the proposed timing recovery algorithm improves jitter variance and SNR margin by a factor of 4 and by 3 dB, respectively. The new four-sample zero-crossing asymmetry compensation algorithm also provides a 34% improvement in jitter performance, a 50% reduction in compensation time, and a 2.0 dB SNR gain compared with other algorithms. Finally, BER evaluation shows that the proposed system, combining the asymmetry compensator and the DPLL, improves on existing schemes by about 0.4 dB and 2 dB.
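
For orientation, the sketch below shows a textbook early-late timing error detector in Python: the error is the difference of the late and early samples, weighted by the sign of the on-time decision. This is the generic form, not the paper's modified detector, and the sample indexing is assumed.

```python
import numpy as np

def early_late_ted(samples, k):
    """Early-late timing error at decision index k. A positive or
    negative error tells the loop filter which way to nudge the
    sampling clock (NCO)."""
    early, on_time, late = samples[k - 1], samples[k], samples[k + 1]
    return (late - early) * np.sign(on_time)

samples = np.array([-0.2, 0.9, 0.3])
print(early_late_ted(samples, 1))  # -> 0.5, clock is sampling early
```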

Pre/Post processor for structural analysis simulation integration with open source solver (Calculix, Code_Aster) (오픈소스 솔버(Calculix, Code_Aster)를 통합한 구조해석 시뮬레이션 전·후처리기 개발)

  • Seo, Dong-Woo;Kim, Jae-Sung;Kim, Myung-Il
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.9 / pp.425-435 / 2017
  • Structural analysis is a necessary procedure not only for large enterprises but also for small and medium-sized ones, both to strengthen the certification process for product delivery and to shorten the path from concept design to detailed design. Open-source solvers, which can be used at low cost, differ from commercial solvers: if there is a problem with the input data, such as the grid, errors or failures can occur at the calculation step. In this paper, we propose a pre- and post-processor that can easily be applied to the analysis of mechanical structural problems using existing open-source structural analysis solvers (Calculix, Code_Aster). In particular, we propose algorithms for analyzing the different data formats these solvers use, in order to extract and generate accurate information such as 3D models, grids, and simulation conditions, and we develop and apply the corresponding information analysis. In addition, to improve the accuracy of the open-source solvers and to prevent errors, we generate grids matched to each solver's characteristics and develop an automatic healing function for the grid model. Finally, to verify the accuracy of the system, the verification and utilization results are compared with those of the software in current use.
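
A sketch of the solver-specific export step such a pre-processor needs: a neutral mesh is dispatched to per-solver writers. The CalculiX writer emits a minimal Abaqus-style .inp deck (the format CalculiX reads); the Code_Aster writer is left as a stub, since Code_Aster takes MED meshes plus a .comm command file. All function names and the mesh layout here are assumptions, not the paper's API.

```python
def write_inp(mesh, path):
    """Write a minimal Abaqus-style .inp deck for CalculiX.
    mesh = {'nodes': {id: (x, y, z)}, 'elements': {id: (n1, n2, n3, n4)}}."""
    with open(path, "w") as f:
        f.write("*NODE\n")
        for nid, (x, y, z) in mesh["nodes"].items():
            f.write(f"{nid}, {x}, {y}, {z}\n")
        f.write("*ELEMENT, TYPE=C3D4\n")  # 4-node tetrahedra assumed
        for eid, conn in mesh["elements"].items():
            f.write(f"{eid}, {', '.join(map(str, conn))}\n")

def write_med(mesh, path):
    raise NotImplementedError("Code_Aster expects a MED mesh + .comm file")

def export_mesh(mesh, solver, path):
    writers = {"calculix": write_inp, "code_aster": write_med}
    return writers[solver](mesh, path)
```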

Design and Implementation of Buffer Cache for EXT3NS File System (EXT3NS 파일 시스템을 위한 버퍼 캐시의 설계 및 구현)

  • Sohn, Sung-Hoon;Jung, Sung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.12 / pp.2202-2211 / 2006
  • EXT3NS is a special-purpose file system for large-scale multimedia streaming servers. It is built on top of a streaming acceleration hardware device called the Network-Storage card. The EXT3NS file system significantly improves streaming performance by eliminating memory-to-memory copy operations, i.e., by sending video/audio from disk directly to the network interface with no main memory buffering. In this paper, we design and implement a buffer cache mechanism, called PMEMCACHE, for the EXT3NS file system. We also propose a buffer cache replacement method, called ONS, for this mechanism. The ONS algorithm outperforms other existing buffer replacement algorithms in a distributed multimedia streaming environment. In EXT3NS with PMEMCACHE, sequential read performance is 33 MB/sec and random read performance is 2.4 MB/sec. The ONS replacement algorithm also outperforms other buffer cache replacement policies by 600 KB/sec. As a result, PMEMCACHE and ONS can greatly improve the performance of a multimedia streaming server that must serve multiple client requests at the same time.
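
The sketch below shows the shape of a pluggable buffer cache in Python: the eviction hook is where a policy like ONS would rank victims. A plain LRU stands in, since the abstract does not spell out the ONS heuristic.

```python
from collections import OrderedDict

class BufferCache:
    """Buffer cache with a pluggable `evict` hook that picks a victim
    block number; the default evicts the least recently used block."""
    def __init__(self, capacity, evict=None):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_no -> data, oldest first
        self.evict = evict or (lambda blocks: next(iter(blocks)))

    def get(self, block_no):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)  # mark recently used
            return self.blocks[block_no]
        return None                            # cache miss

    def put(self, block_no, data):
        if block_no not in self.blocks and len(self.blocks) >= self.capacity:
            del self.blocks[self.evict(self.blocks)]  # replacement point
        self.blocks[block_no] = data
        self.blocks.move_to_end(block_no)
```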

Design and Implementation of Autonomic De-fragmentation for File System Aging (파일 시스템 노화를 해소하기 위한 자동적인 단편화 해결 시스템의 설계와 구현)

  • Lee, Jun-Seok;Park, Hyun-Chan;Yoo, Chuck
    • The KIPS Transactions: Part A / v.16A no.2 / pp.101-112 / 2009
  • Existing techniques for defragmenting a file system, such as disk defragmentation programs, require intensive disk operations over a certain period at a specific time. In this paper, to solve this problem, we design and implement an automatic and continuous defragmentation system that spreads the disk operations out over time. We propose the Automatic Layout Scoring (ALS) mechanism for measuring the degree of fragmentation, and a Lazy Copy mechanism that copies fragmented data during idle time to distribute the disk operations. We locate fragmented files with the Automatic Layout Scoring mechanism and then find empty spaces for them. After lazily copying a found file to empty space, so that the file cannot be lost, the algorithm resolves the fragmentation by updating the file's i-node. We implement these algorithms in Linux and evaluate them on small, fragmented files by measuring layout scores. We outperform the Linux EXT2 file system by 2.4%~10.4% in the layout scoring evaluation. Read and write performance across various file sizes is also better than EXT2, by 1%~8.5% for writes and 1.2%~7.5% for reads. This system resolves fragmentation automatically, without disturbing I/O tasks or requiring manual management.
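
A layout score of the kind ALS computes can be illustrated with a toy metric: the fraction of logically consecutive blocks that are also physically consecutive on disk. The exact ALS formula is not given in the abstract, so this is only an assumed stand-in.

```python
def layout_score(physical_blocks):
    """Return the fraction of adjacent logical blocks whose physical
    block numbers are also adjacent; 1.0 means fully contiguous."""
    if len(physical_blocks) < 2:
        return 1.0
    contiguous = sum(b == a + 1 for a, b in zip(physical_blocks, physical_blocks[1:]))
    return contiguous / (len(physical_blocks) - 1)

print(layout_score([7, 8, 9, 42, 43]))  # 3 of 4 adjacent pairs -> 0.75
```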

Design of Multimode Block Cryptosystem for Network Security (네트워크 보안을 위한 다중모드 블록암호시스템의 설계)

  • 서영호;박성호;최성수;정용진;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.11C / pp.1077-1087 / 2003
  • In this paper, we propose an architecture for a cryptosystem with various operating modes for network security and implement it in hardware using an ASIC library. The cryptosystem includes the standard block ciphers AES, SEED, and 3DES. The implemented cryptosystem can encrypt and decrypt data in real time over wired/wireless networks with minimal latency (minimum 64 clocks, maximum 256 clocks). It supports CTR mode, which has recently come into wide use, as well as the conventional block cipher modes ECB, CBC, and OFB, and it operates on multiple block widths (64, 128, 192, and 256 bits). The hardware can be extended to other algorithms required by network security protocols such as IPsec, and the included cipher blocks can operate simultaneously. Self-ciphering and various other ciphering modes are supported through hardware sharing and a programmable datapath. The overall operation is programmed through a serial communication port, and the operation is determined by control signals decoded from instructions issued by the host. The hardware, designed in VHDL, was synthesized with Hynix 0.25 μm CMOS technology and uses about 100,000 gates. Timing simulation with NC-Verilog confirmed stable operation above 100 MHz.
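
Of the supported modes, CTR is the easiest to show compactly. The Python sketch below (purely illustrative; the paper's design is hardware) builds CTR mode around any block-encryption primitive such as the AES, SEED, or 3DES cores mentioned above: encrypting nonce||counter yields a keystream that is XORed with the data, so the same routine encrypts and decrypts.

```python
def ctr_crypt(block_encrypt, nonce, data, block_size=16):
    """CTR mode: keystream = E(nonce || counter), output = data XOR
    keystream. block_encrypt is any function mapping one cipher block
    to one cipher block; the nonce must be shorter than the block."""
    out = bytearray()
    for i in range(0, len(data), block_size):
        counter = (i // block_size).to_bytes(block_size - len(nonce), "big")
        keystream = block_encrypt(nonce + counter)
        out.extend(b ^ k for b, k in zip(data[i:i + block_size], keystream))
    return bytes(out)

# usage with a dummy 'cipher' (XOR with a fixed pad) just to show the
# call shape; a real AES/SEED/3DES core would replace it
dummy = lambda block: bytes(b ^ 0x5A for b in block)
ct = ctr_crypt(dummy, b"\x00" * 8, b"hello world")
assert ctr_crypt(dummy, b"\x00" * 8, ct) == b"hello world"
```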

Usability of Multiple Confocal SPECT SYSTEM in the Myocardial Perfusion SPECT Using 99mTc (99mTc을 이용한 심근 관류 SPECT에서 Multiple Confocal SPECT System의 유용성)

  • Shin, Chae-Ho;Pyo, Sung-Jai;Kim, Bong-Su;Cho, Yong-Gyi;Jo, Jin-Woo;Kim, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.65-71 / 2011
  • Purpose: The recently adopted multiple confocal SPECT system (hereinafter IQ SPECT™) differs substantially from conventional myocardial perfusion SPECT in collimator geometry, image acquisition method, and image reconstruction method. This study compared the new equipment with the conventional one in order to design a protocol suited to IQ SPECT and to determine the characteristics and usefulness of IQ SPECT. Materials and Methods: 1. For the LEHR (low-energy high-resolution) collimator and the multiple confocal collimator, 37 MBq of 99mTc was placed in an acrylic dish and the sensitivity (cpm/μCi) of each was measured at distances of 5 cm, 10 cm, 20 cm, 30 cm, and 40 cm. 2. Based on the sensitivity measurements, an IQ SPECT protocol was designed against the conventional myocardial SPECT protocol; 278 kBq/mL, 7.4 kBq/mL, and 48 kBq/mL of 99mTc were injected into the myocardium, soft tissue, and liver sites of an anthropomorphic torso phantom, and myocardial perfusion SPECT was performed. 3. To compare FWHMs (full width at half maximum) after image reconstruction with the LEHR collimator, FWHM (mm) was measured with a 99mTc line source while only the reconstruction algorithm was changed: the FBP (filtered back-projection) method used in conventional myocardial perfusion SPECT versus the 3D OSEM (ordered-subsets expectation maximization) method of IQ SPECT. Results: 1. The sensitivities (cpm/μCi) of the IQ SPECT collimator were 302, 382, 655, 816, and 1178, and those of the LEHR collimator were 204, 204, 202, 201, and 198, at distances of 5 cm, 10 cm, 20 cm, 30 cm, and 40 cm, respectively; the sensitivity advantage of IQ SPECT over LEHR grows to about 4 times at 30 cm. 2. A myocardial perfusion SPECT protocol was designed around the geometric characteristics of IQ SPECT based on the sensitivity results, and the phantom test of this protocol showed that examination time can be reduced to 1/4 of the conventional time. 3. Comparing FWHMs by reconstruction algorithm after the SPECT test with the LEHR collimator, the FWHM roughly doubled with the 3D OSEM method. Conclusion: For myocardial perfusion SPECT, IQ SPECT uses the multiple confocal collimator to enhance sensitivity, reduces examination time, and improves visual image quality through its myocardium-centered geometric acquisition and reconstruction methods. Given these benefits, patients can expect more comfortable and more accurate examinations, and further study with additional clinical material is warranted.
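
As a small aside on the measurement itself, the FWHM of a line-source profile can be computed as in the following sketch, with linear interpolation at the half-maximum crossings. This is generic measurement code under assumed pixel spacing and a single-peaked profile, not the vendor console's algorithm.

```python
import numpy as np

def fwhm_mm(profile, pixel_mm=1.0):
    """FWHM of a single-peaked 1-D profile, in mm, interpolating the
    positions where the profile crosses half of its maximum."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    lo = left - (profile[left] - half) / (profile[left] - profile[left - 1]) if left > 0 else float(left)
    hi = right + (profile[right] - half) / (profile[right] - profile[right + 1]) if right < len(profile) - 1 else float(right)
    return (hi - lo) * pixel_mm

print(fwhm_mm(np.array([0.0, 0.5, 1.0, 0.5, 0.0]), pixel_mm=2.0))  # -> 4.0
```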

A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현;김상욱
    • Journal of KIISE: Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game grows with the various elements of the game scenario, controlling the interrelations of the game objects becomes a problem. A game system therefore needs to coordinate the responses of its game objects, and the animated behaviors of those objects must be controlled in terms of the game scenario. To produce realistic game simulations, the system must include a structure for designing the interactions among game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of game objects within a game scenario. For this method, we suggest a game agent system as a framework based on intelligent agents that make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface in which a user can define various interrelations of the game objects. These techniques can handle the autonomy level of the game objects, the associated collision avoidance method, and so on, and they make coherent decision-making by the game objects possible when the scene changes. The rule-based behavior control guides the simulation of the game objects; the rules are pre-defined by the user through the visual interface for designing their interactions. The Agent State Decision Network, composed of the visual elements, passes information along and infers the current state of the game objects. All of these methods can monitor and check changes in the motion states of game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.
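
A minimal rule-driven agent of this flavor can be sketched as below: each tick, the first rule whose condition matches the world state fires and sets the agent's state. This is a toy stand-in for the paper's Agent State Decision Network, which expresses the same rule/state idea as a visual network; all names are hypothetical.

```python
class Rule:
    def __init__(self, condition, action):
        self.condition, self.action = condition, action

class GameAgent:
    """Rule-based behavior control: rules are checked in priority
    order and the first match decides the agent's next state."""
    def __init__(self, rules, state="idle"):
        self.rules, self.state = rules, state

    def tick(self, world):
        for rule in self.rules:
            if rule.condition(self, world):
                self.state = rule.action
                break

# e.g. collision avoidance takes priority over patrolling
rules = [Rule(lambda a, w: w["nearest_dist"] < 1.0, "avoid"),
         Rule(lambda a, w: True, "patrol")]
agent = GameAgent(rules)
agent.tick({"nearest_dist": 0.5})
print(agent.state)  # -> avoid
```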

Development of a Classification Method for Forest Vegetation on the Stand Level, Using KOMPSAT-3A Imagery and Land Coverage Map (KOMPSAT-3A 위성영상과 토지피복도를 활용한 산림식생의 임상 분류법 개발)

  • Song, Ji-Yong;Jeong, Jong-Chul;Lee, Peter Sang-Hoon
    • Korean Journal of Environment and Ecology / v.32 no.6 / pp.686-697 / 2018
  • Due to advances in remote sensing technology, it has become easier to obtain high-resolution imagery frequently enough to detect subtle changes over extensive areas, particularly in forest, which is not readily sub-classified. However, time-series analysis of high-resolution images requires extensive ground truth data. In this study, the potential of the land coverage map as ground truth data was tested for classifying high-resolution imagery. The study site was Wonju-si in Gangwon-do, South Korea, which has a mix of urban and natural areas. KOMPSAT-3A imagery taken in March 2015 and the land coverage map published in 2017 were used as source data. Two pixel-based classification algorithms, Support Vector Machine (SVM) and Random Forest (RF), were selected for the analysis. Classification of forest only was compared with classification of the whole study area except wetland. Confusion matrices from the classifications showed that overall accuracy for both targets was higher with the RF algorithm than with SVM. While the overall accuracy of the forest-only analysis was 18.3% higher with RF than with SVM, for the whole-region analysis the difference was a smaller 5.5%. For the SVM algorithm, adding a majority analysis step gave a marginal improvement of about 1% over plain SVM. The RF algorithm was more effective at identifying broad-leaved forest within the forest area, but for the other classes the SVM algorithm was more effective. Since only the two pixel-based classification algorithms were tested here, introducing time-series analysis and object-based algorithms is expected to improve overall accuracy and reliability. This approach should contribute to large-scale land planning by providing an effective land classification method at higher spatial and temporal scales.
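
A hedged sketch of the pixel-based comparison using scikit-learn: the feature matrix X (n_pixels, n_bands) and labels y, drawn from the KOMPSAT-3A bands and the land coverage map, are assumed to exist already; hyperparameters here are defaults, not the study's settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

def compare_classifiers(X, y):
    """Fit RF and SVM on the same split and report overall accuracy
    and the confusion matrix for each, as in the study's comparison."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("SVM", SVC(kernel="rbf"))]:
        y_hat = clf.fit(X_tr, y_tr).predict(X_te)
        print(name, "overall accuracy:", accuracy_score(y_te, y_hat))
        print(confusion_matrix(y_te, y_hat))
```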