• Title/Summary/Keyword: kernel technique


Massive Fluid Simulation Using a Responsive Interaction Between Surface and Wave Foams (수면거품과 웨이브거품의 미세한 상호작용을 이용한 대규모 유체 시뮬레이션)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.29-39 / 2017
  • This paper presents a unified framework to efficiently and realistically simulate surface and wave foams. The framework is designed to first project 3D water particles from an underlying water solver onto 2D screen space in order to reduce the computational complexity of determining where foam particles should be generated. Because foam effects are often created primarily in fast and complicated water flows, we analyze the acceleration and curvature values to identify the areas exhibiting such flow patterns. Foam particles are emitted from the identified areas in 3D space, and each foam particle is advected according to its type, which is classified on the basis of velocity, thereby capturing the essential characteristics of foam wave motions. We improve the realism of the resulting foam by classifying it into two types: surface foam and wave foam. Wave foam is characterized by the sharp wave patterns of torrential flows, and surface foam is characterized by a cloudy foam shape even in water with reduced motion. Based on these features, we propose a technique to correct the velocity and position of a foam particle. In addition, we propose a kernel technique using the screen space density to efficiently reduce redundant foam particles, resulting in improved overall memory efficiency without loss of visual detail in terms of foam effects. Experiments convincingly demonstrate that the proposed approach is efficient and easy to use while delivering high-quality results.
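
A minimal sketch of the screen-space density idea described above, assuming an orthographic projection onto the screen plane and a uniform 2D grid; the function and parameter names (cull_redundant_foam, cell_size, max_density) are hypothetical and not the paper's implementation.

```python
import numpy as np

def cull_redundant_foam(positions, cell_size=4.0, max_density=8, screen=(1280, 720)):
    """Project 3D foam particles to 2D screen space, bin them into grid cells,
    and keep at most max_density particles per cell (placeholder parameters)."""
    xy = positions[:, :2]                       # orthographic projection: keep x, y (assumption)
    cols = int(np.ceil(screen[0] / cell_size))
    rows = int(np.ceil(screen[1] / cell_size))
    ix = np.clip((xy[:, 0] / cell_size).astype(int), 0, cols - 1)
    iy = np.clip((xy[:, 1] / cell_size).astype(int), 0, rows - 1)
    cell_id = iy * cols + ix

    keep = np.zeros(len(positions), dtype=bool)
    counts = {}
    for i, c in enumerate(cell_id):
        n = counts.get(c, 0)
        if n < max_density:                     # over-dense cells contribute no further particles
            keep[i] = True
            counts[c] = n + 1
    return positions[keep]

# Example: random particles over a 1280x720 screen and 100 depth units.
pts = np.random.rand(10000, 3) * np.array([1280.0, 720.0, 100.0])
print(len(cull_redundant_foam(pts)), "of", len(pts), "foam particles kept")
```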

Deep Learning-based Hyperspectral Image Classification with Application to Environmental Geographic Information Systems (딥러닝 기반의 초분광영상 분류를 사용한 환경공간정보시스템 활용)

  • Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing / v.33 no.6_2 / pp.1061-1073 / 2017
  • In this study, images were classified using a convolutional neural network (CNN), a deep learning technique, to investigate the feasibility of information production through a combination of artificial intelligence and spatial data. CNN determines kernel attributes based on a classification criterion and extracts information from feature maps to classify each pixel. In this study, a CNN network was constructed to classify materials with similar spectral characteristics and attribute information; this is difficult to achieve by conventional image processing techniques. A Compact Airborne Spectrographic Imager (CASI) and an Airborne Imaging Spectrometer for Application (AISA) were used on the following three study sites to test this method: Site 1, Site 2, and Site 3. Site 1 and Site 2 were agricultural lands covered in various crops, such as potato, onion, and rice. Site 3 included different buildings, such as single and joint residential facilities. Results indicated that the classification of crop species at Site 1 and Site 2 using this method yielded accuracies of 96% and 99%, respectively. At Site 3, the designation of buildings according to their purpose yielded an accuracy of 96%. Using a combination of existing land cover maps and spatial data, we propose a thematic environmental map that provides seasonal crop types and facilitates the creation of a land cover map.
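
The abstract does not specify the network architecture, so the following is only a generic patch-based CNN pixel classifier in PyTorch with placeholder band, patch, and class counts; it illustrates how spectral bands can be treated as input channels for per-pixel classification.

```python
import torch
import torch.nn as nn

class PixelCNN(nn.Module):
    """Classifies the center pixel of a small spatial patch from its spectral bands."""
    def __init__(self, n_bands=144, n_classes=10):   # placeholder band/class counts
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1),  # spectral bands as channels
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, bands, patch, patch)
        f = self.features(x).flatten(1)
        return self.classifier(f)         # class scores for the patch's center pixel

# One forward pass on a dummy batch of 5x5 patches:
model = PixelCNN()
logits = model(torch.randn(8, 144, 5, 5))
print(logits.shape)                       # torch.Size([8, 10])
```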

Hardware Architecture of High Performance Cipher for Security of Digital Hologram (디지털 홀로그램의 보안을 위한 고성능 암호화기의 하드웨어 구조)

  • Seo, Young-Ho;Yoo, Ji-Sang;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.17 no.2 / pp.374-387 / 2012
  • In this paper, we implement new hardware for finding the significant coefficients of a digital hologram and ciphering them using the discrete wavelet packet transform (DWPT). The discrete wavelet transform (DWT) and packetization of subbands are used, and the adopted ciphering technique can encrypt the subbands with various levels of robustness based on the level of the wavelet transform and the threshold of subband energy. The hologram encryption consists of two parts: the first processes the DWPT, and the second encrypts the coefficients. We propose a lifting-based hardware architecture for fast DWPT and a multi-mode block ciphering system for the various types of encryption. A unit cell that performs the repeated arithmetic with the same structure is proposed and then expanded into the lifting kernel hardware. The block ciphering system is configured with three block ciphers, AES, SEED, and 3DES, and encrypts and decrypts data with minimal latency (minimum 128 clocks, maximum 256 clocks) in real time. The information of a digital hologram can be hidden by encrypting only 0.032% of the data. The implemented hardware used about 200K gates in a $0.25{\mu}m$ CMOS library and operated stably at a 165MHz clock frequency in timing simulation.
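
As an illustration of the lifting computation that such a DWPT kernel performs in hardware, here is a software sketch of one level of the reversible CDF 5/3 lifting transform; the paper's actual filter choice and hardware mapping are not given in the abstract, so this is only a representative example.

```python
def lifting_53_forward(x):
    """One level of the reversible CDF 5/3 lifting transform on an even-length
    integer signal, with simple symmetric boundary handling.
    Returns (approximation, detail) coefficient lists."""
    n = len(x)
    assert n % 2 == 0 and n >= 2
    even = list(x[0::2])
    odd = list(x[1::2])
    # Predict step: detail = odd - floor((left even + right even) / 2)
    d = []
    for i in range(len(odd)):
        right = even[i + 1] if i + 1 < len(even) else even[i]
        d.append(odd[i] - (even[i] + right) // 2)
    # Update step: approximation = even + floor((left detail + right detail + 2) / 4)
    s = []
    for i in range(len(even)):
        left = d[i - 1] if i > 0 else d[0]
        s.append(even[i] + (left + d[i] + 2) // 4)
    return s, d

print(lifting_53_forward([10, 12, 14, 200, 16, 18, 20, 22]))
```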

A Design and Implementation of RSS Data Collecting Engine based on Web 2.0 (웹 2.0 기반 RSS 데이터 수집 엔진의 설계 및 구현)

  • Kang, Pil-Gu;Kim, Jae-Hwan;Lee, Sang-Jun;Chae, Jin-Seok
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1496-1506 / 2007
  • The environment of web services has changed a great deal due to the progress of internet technology and the active participation of users. Established web services are static and passive, but recent web services are becoming dynamic and active. Web 2.0 reflects this change in web services well, and its primary feature is the active participation of users. Since the amount of generated information keeps growing, sharing that information quickly and correctly is essential. The web 2.0 technologies that meet this need are web syndication and tagging. Web syndication produces feeds so that other sites or users can receive the content of a web site, while tagging forms the kernel of the information: many internet users rapidly share information through tag searches. In this paper, we propose an efficient technique to improve web 2.0 technologies such as web syndication and tagging by using a data collection engine. The data collection engine stores a user's web site information in a database and accesses the user's web site to collect updated data. The experimental results show that our approach can improve the search speed by up to 3.14 times over the existing method and reduce the size of data for building associated tags by up to 66%.
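
A small sketch of the feed-collection step such an engine performs, using only the Python standard library; the feed URL, the seen-items store, and the field names are assumptions for illustration, not the paper's design.

```python
import urllib.request
import xml.etree.ElementTree as ET

def collect_new_items(feed_url, seen_links):
    """Fetch an RSS 2.0 feed and return only items not collected before."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    new_items = []
    for item in root.iter("item"):                 # RSS <item> elements
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            new_items.append({
                "title": item.findtext("title", default=""),
                "link": link,
                "tags": [c.text for c in item.findall("category") if c.text],
            })
            seen_links.add(link)                   # remember collected items
    return new_items

# Example usage (hypothetical feed URL):
# seen = set()
# print(collect_new_items("https://example.com/rss.xml", seen))
```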


An Efficient TCP Buffer Tuning Algorithm based on Packet Loss Ratio(TBT-PLR) (패킷 손실률에 기반한 효율적인 TCP Buffer Tuning 알고리즘)

  • Yoo Gi-Chul;Kim Dong-kyun
    • The KIPS Transactions:PartC / v.12C no.1 s.97 / pp.121-128 / 2005
  • The existing TCP (Transmission Control Protocol) is known to be unsuitable for a network with a high BDP (Bandwidth-Delay Product) because of the fixed small or large buffer sizes at the TCP sender and receiver. Thus, several schemes that adjust the buffer sizes automatically according to network conditions have been proposed to improve end-to-end TCP throughput. ATBT (Automatic TCP Buffer Tuning) attempts to set the buffer size of the TCP sender according to its current congestion window size, but ATBT assumes that the buffer size of the TCP receiver is the maximum value defined by the operating system. In DRS (Dynamic Right Sizing), by estimating the amount of arriving TCP data as twice the amount of TCP data received previously, the TCP receiver simply reserves the buffer size for the next arrival accordingly. However, we do not need to reserve exactly twice the buffer size because of the possibility of TCP segment loss. We propose an efficient TCP buffer tuning technique (called TBT-PLR: TCP buffer tuning algorithm based on packet loss ratio) that adopts the ATBT mechanism for the TCP sender and the TBT-PLR mechanism for the TCP receiver. For the purpose of testing actual TCP performance, we implemented TBT-PLR by modifying the Linux kernel version 2.4.18 and evaluated TCP performance by comparing TBT-PLR with TCP schemes using fixed buffer sizes. As a result, more balanced usage among TCP connections was obtained.
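
To make the buffer-sizing idea concrete, here is a toy user-space sketch that scales the receive buffer reserved from the previously received amount down as the observed loss ratio grows, instead of always reserving the full 2x that DRS uses; the scaling rule and all names are placeholders, not the exact TBT-PLR formula (which the paper implements inside the kernel).

```python
import socket

def tuned_rcvbuf(prev_received_bytes, loss_ratio, max_buf=4 * 1024 * 1024):
    """Placeholder sizing rule: with no loss behave like DRS (factor 2);
    with heavier loss reserve less, since lost segments will not arrive
    within the next round anyway."""
    factor = 2.0 * (1.0 - min(max(loss_ratio, 0.0), 1.0))
    factor = max(factor, 1.0)                       # never shrink below 1x
    return min(int(prev_received_bytes * factor), max_buf)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
size = tuned_rcvbuf(prev_received_bytes=256 * 1024, loss_ratio=0.05)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
print("requested SO_RCVBUF:", size)
```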

Parallel SystemC Cosimulation using Virtual Synchronization (가상 동기화 기법을 이용한 SystemC 통합시뮬레이션의 병렬 수행)

  • Yi, Young-Min;Kwon, Seong-Nam;Ha, Soon-Hoi
    • Journal of KIISE:Computer Systems and Theory / v.33 no.12 / pp.867-879 / 2006
  • This paper concerns fast and time-accurate HW/SW cosimulation for MPSoC (Multi-Processor System-on-Chip) architectures, where multiple software and/or hardware components exist. It is becoming more and more common to use MPSoC architectures to design complex embedded systems. In cosimulation of such architectures, as the number of component simulators participating in the cosimulation increases, the time synchronization overhead among simulators increases, resulting in low overall cosimulation performance. Although SystemC cosimulation frameworks show high cosimulation performance, that performance is inversely proportional to the number of simulators. In this paper, we extend the novel technique called virtual synchronization, which boosts cosimulation speed by reducing time synchronization overhead: (1) SystemC simulation is supported seamlessly in the virtual synchronization framework without requiring modification of the SystemC kernel, and (2) parallel execution of component simulators with virtual synchronization is supported. We compared the performance and accuracy of the proposed parallel SystemC cosimulation framework with MaxSim, a well-known commercial SystemC cosimulation framework; the proposed framework showed 11 times faster performance for an H.263 decoder example, while the accuracy loss was kept below 5%.
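
The abstract does not detail the virtual synchronization mechanism, so the following is only a toy Python sketch of the core idea: component simulators advance local time freely and report timestamps only at communication events, which the framework then merges in global time order instead of synchronizing every cycle. Event contents and timing values are invented for illustration.

```python
import heapq

def simulator(name, events):
    """A component simulator yields (local_time, name, payload) only when it communicates."""
    for t, payload in events:
        yield t, name, payload      # no per-cycle synchronization with other simulators

def cosimulate(sims):
    """Merge communication events from all simulators in global time order."""
    for t, name, payload in heapq.merge(*sims):   # lazy merge keyed by timestamps
        print(f"t={t:4d}  {name} -> {payload}")

hw = simulator("HW", [(10, "frame ready"), (40, "frame ready")])
sw = simulator("SW", [(25, "decode request"), (55, "decode request")])
cosimulate([hw, sw])
```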

Improvement of Endoscopic Image using De-Interlacing Technique (De-Interlace 기법을 이용한 내시경 영상의 화질 개선)

  • 신동익;조민수;허수진
    • Journal of Biomedical Engineering Research / v.19 no.5 / pp.469-476 / 1998
  • When medical images such as ultrasonography and endoscopy are acquired and displayed on the VGA monitor of a PC system, tear-drop image degradation appears through scan conversion. In this study, we compare several methods that can solve this degradation and implement a hardware system that resolves the problem in real time on a PC. High-quality image display and real-time acquisition and processing are possible with a dedicated de-interlacing device and a PCI bridge in our hardware system, and image quality is improved remarkably. Because it is implemented as a PC-based system, acquiring and saving images, attaching text comments to those images, and PACS networking can be easily implemented.
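
For reference, a minimal NumPy sketch of two common de-interlacing strategies, weave and bob with line interpolation; the frame sizes and field order are assumptions, and the paper's dedicated hardware may use a different method.

```python
import numpy as np

def weave(top_field, bottom_field):
    """Interleave two fields into one progressive frame (good for static scenes)."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame

def bob(field):
    """Rebuild a full frame from a single field by averaging adjacent lines
    (avoids tear-drop artifacts on moving objects, at the cost of vertical detail)."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=np.float32)
    frame[0::2] = field
    shifted = np.vstack([field[1:], field[-1:]])    # repeat last line at the border
    frame[1::2] = (field.astype(np.float32) + shifted) / 2.0
    return frame

top = np.random.randint(0, 256, (240, 640), dtype=np.uint8)
bot = np.random.randint(0, 256, (240, 640), dtype=np.uint8)
print(weave(top, bot).shape, bob(top).shape)        # (480, 640) (480, 640)
```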


Effect of Band Spotty Fertilization on the Yield and Growth of Peanut(Arachis hypogaea L.) in Plastic Film Mulching Cultivation (비닐피복 땅콩 재배시 생육 및 수량에 미치는 국소시비 효과)

  • Yang, Chang-Hyu;Yoo, Chul-Hyun;Shin, Bok-woo;Cheong, Young-Keun;Kang, Seung-Won
    • Korean Journal of Soil Science and Fertilizer / v.39 no.5 / pp.298-302 / 2006
  • This study was carried out to establish a low-input fertilization and seeding technique using a simultaneous seeding and fertilizer application machine and a band spotty applicator manufactured for the experiment, in plastic film mulching cultivation of peanut (Arachis hypogaea L.). The labor hours for seeding with the simultaneous seeding and fertilizing machine were reduced by over 90% compared with the control plot ($17.3hr\;10a^{-1}$). In band spotty fertilization plots, the emergence date was delayed by about 4 days and the seedling stand rate decreased by 11~18% compared with the control plot (manual labor). The total nitrogen content of the soil after the experiment increased, while the contents of organic matter, available phosphate, and exchangeable potassium decreased compared with before the experiment. The nitrate nitrogen content increased in band spotty fertilization (BSF) plots as the amount of applied fertilizer increased from the early growth stage to the middle growth stage. The growth rate increased in band spotty fertilization plots, and the amounts of phosphate and potassium absorbed by peanut increased in the 70% band spotty fertilization plot compared with the control plot. Peanut yield increased in the 70% band spotty fertilization plot owing to a higher pod kernel ratio and ripened pod rate compared with the control plot ($3,150kg\;ha^{-1}$). It was found that 70% band spotty fertilization was more effective as a fertilization method to reduce both environmental pollution and chemical nitrogen fertilizer in plastic film mulching cultivation of peanut.

Development of Software-Defined Perimeter-based Access Control System for Security of Cloud and IoT System (Cloud 및 IoT 시스템의 보안을 위한 소프트웨어 정의 경계기반의 접근제어시스템 개발)

  • Park, Seung-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.2 / pp.15-26 / 2021
  • Recently, as the adoption of cloud, mobile, and IoT has become active, there is a growing need for technologies that can supplement the limitations of traditional security solutions based on fixed perimeters, such as firewalls and Network Access Control (NAC). In response, SDP (Software Defined Perimeter) has recently emerged as a new base technology. Unlike existing security technologies, SDP can set security boundaries (by installing Gateway S/W) regardless of the location of the protected resources (servers, IoT gateways, etc.) and neutralize most network-based hacking attacks, which are becoming increasingly sophisticated. In particular, SDP is regarded as a security technology well suited to the cloud and IoT fields. In this study, a new access control system was proposed by combining SDP with hash-tree-based large-scale data high-speed signature technology. Through a process authentication function using the large-scale data high-speed signature technology, it prevents the threat of unknown malware intruding into the endpoint in advance, and it implements kernel-level security technology that blocks user-level attacks during the backup and recovery of major data. As a result, endpoint security, which is a weak point of SDP, has been strengthened. The proposed system was developed as a prototype, and its performance test was completed through a test by an authorized testing agency (TTA V&V Test). The SDP-based access control solution is a technology with high potential that can also be used in smart car security.
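
As background for the hash-tree-based signature component, here is a generic Merkle-tree construction sketch in Python; it illustrates why only the root needs to be signed or anchored while any data block can later be verified with a short audit path. It is not the specific signature product referenced in the paper, and the block contents are invented.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build the hash tree bottom-up and return the root hash."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"process image #%d" % i for i in range(8)]
print(merkle_root(blocks).hex())
```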

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1723-1735 / 2022
  • Landslides are one of the most prevalent natural disasters, threatening both people and property. Landslides can also cause damage at the national level, so effective prediction and prevention are essential. Research to produce landslide susceptibility maps with high accuracy is steadily being conducted, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set up in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). The landslide-related factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and landslide susceptibility maps were verified through average precision (AP) and root mean square error (RMSE), and as a result of the verification, the patch-based CNN model showed 3.4% better performance than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use policies and landslide management policies.
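
To clarify the pixel-based versus patch-based distinction, here is a small NumPy sketch of how the same factor stack would be sampled for a DNN (one vector per cell) and for a CNN (a neighborhood patch per cell); the band count, patch size, and grid size are placeholders, not the study's actual data dimensions.

```python
import numpy as np

factors = np.random.rand(15, 256, 256).astype(np.float32)   # 15 landslide-factor layers (placeholder)

def pixel_sample(stack, row, col):
    """Pixel-based DNN input: the factor vector at a single cell."""
    return stack[:, row, col]                                # shape (15,)

def patch_sample(stack, row, col, size=9):
    """Patch-based CNN input: a size x size neighborhood centered on the cell,
    with edge padding near the borders."""
    half = size // 2
    padded = np.pad(stack, ((0, 0), (half, half), (half, half)), mode="edge")
    return padded[:, row:row + size, col:col + size]         # shape (15, 9, 9)

print(pixel_sample(factors, 100, 100).shape)                 # (15,)
print(patch_sample(factors, 100, 100).shape)                 # (15, 9, 9)
```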