• Title/Summary/Keyword: model of computation


Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods (저계수 행렬 근사 및 CP 분해 기법을 이용한 CNN 압축)

  • Moon, HyeonCheol;Moon, Gihwa;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.2
    • /
    • pp.125-131
    • /
    • 2021
  • In recent years, Convolutional Neural Networks (CNNs) have achieved outstanding performance in computer vision fields such as image classification, object detection, and visual quality enhancement. However, because CNN models require huge amounts of computation and memory, their application to low-power environments such as mobile or IoT devices is limited. Therefore, the need for neural network compression that reduces model size while preserving task performance as much as possible has been emerging. In this paper, we propose a method to compress CNN models by combining two matrix decomposition methods: LR (Low-Rank) approximation and CP (Canonical Polyadic) decomposition. Unlike conventional methods that apply a single decomposition method to a CNN model, we selectively apply the two decomposition methods depending on the layer type to enhance compression performance. To evaluate the proposed method, we use image classification models such as VGG-16, ResNet-50, and MobileNetV2. The experimental results show that, over the same 1.5x to 12.1x range of compression ratios, the proposed method gives better classification performance than the existing method that applies only LR approximation.
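The LR-approximation half of the abstract's pipeline can be illustrated with a minimal numpy sketch (not the paper's implementation): a weight matrix is replaced by two thin factors obtained from a truncated SVD, trading a small approximation error for far fewer parameters. The shapes and rank below are arbitrary illustration choices; a CP decomposition of 4-D convolution kernels would additionally need a tensor-decomposition library.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Factor a weight matrix W (out x in) into two thin matrices A @ B
    via truncated SVD -- the LR-approximation step applied to fully
    connected (or reshaped convolutional) layers."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (out x rank), singular values folded in
    B = Vt[:rank, :]             # (rank x in)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
A, B = low_rank_factorize(W, rank=32)

orig_params = W.size                 # 256 * 512 = 131072
lr_params = A.size + B.size          # 256*32 + 32*512 = 24576 (~5.3x smaller)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

In a real network the rank per layer is chosen to balance the compression ratio against accuracy loss, which is exactly the trade-off the paper's experiments measure.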

Single Image Super Resolution Based on Residual Dense Channel Attention Block-RecursiveSRNet (잔여 밀집 및 채널 집중 기법을 갖는 재귀적 경량 네트워크 기반의 단일 이미지 초해상도 기법)

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.429-440
    • /
    • 2021
  • With the recent development of deep convolutional neural network learning, deep learning techniques applied to single-image super-resolution are showing good results. One existing deep learning-based super-resolution technique is RDN (Residual Dense Network), in which initial feature information is transmitted to the last layer through residual dense blocks, and subsequent layers are restored using the input information of previous layers. However, if all hierarchical features are connected and learned and a large number of residual dense blocks are stacked, then despite good performance, a large number of parameters and a huge computational load are needed: training the network takes a long time, processing is slow, and the model is not applicable to mobile systems. In this paper, we use the residual dense structure, a continuous-memory structure that reuses previous information, together with a residual dense channel attention block that uses channel attention to weight feature maps by importance. We propose a method that can increase depth to obtain a large receptive field while keeping the model compact. Experimental results show that, at 4x magnification, the proposed network obtains a PSNR only 0.205 dB lower on average than RDN, with about 1.8 times faster processing speed, about 10 times fewer parameters, and about 1.74 times less computation.
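The channel-attention mechanism the abstract relies on can be sketched in a few lines of numpy (a squeeze-and-excitation-style gate, not the paper's trained network): each channel is summarized by its global average, a small bottleneck produces a per-channel weight in (0, 1), and the feature map is rescaled channel-wise. The random matrices stand in for learned weights.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Toy channel-attention gate: global-average-pool each channel,
    pass the descriptor through a two-layer bottleneck, and rescale
    channels by the resulting sigmoid weights.  The weight matrices
    are random placeholders for learned parameters."""
    c, h, w = feat.shape
    rng = rng or np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) / np.sqrt(c)
    w2 = rng.standard_normal((c, c // reduction)) / np.sqrt(c // reduction)
    squeeze = feat.mean(axis=(1, 2))             # (c,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # per-channel weight in (0,1)
    return feat * gate[:, None, None]

x = np.random.default_rng(1).standard_normal((16, 8, 8))
y = channel_attention(x)
```

Because the gate only rescales channels, it adds very few parameters relative to a convolution, which is why stacking such blocks keeps the model lightweight.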

Correction for SPECT image distortion by non-circular detection orbits (비원형 궤도에서의 검출에 의한 SPECT 영상 왜곡 보정)

  • Lee, Nam-Yong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.3
    • /
    • pp.156-162
    • /
    • 2007
  • The parallel-beam SPECT system acquires projection data using collimators in conjunction with photon detectors. The projection data are, however, blurred by the point response function of the collimator, which defines the range of directions from which photons can be detected. By increasing the number of parallel holes per unit area in the collimator, one can reduce this blurring effect; this approach, however, still suffers from blurring when the distance between the object and the collimator becomes large. In this paper we consider correction methods for artifacts caused by the non-circular orbit of parallel-beam SPECT with many parallel holes per detector cell. To do so, we model the relationship between the object and its projection data as a linear system and propose an iterative reconstruction method that includes artifact correction. We compute the projector and the backprojector required by the iterative method as a sum of convolutions with distance-dependent point response functions rather than in matrix form, where those functions are computed analytically from a single function. By doing so, we dramatically reduce the computation time and memory required to generate the projector and backprojector. We conducted several simulation studies to compare the performance of the proposed method with that of the conventional Fourier method. The results show that the proposed method outperforms the Fourier method both objectively and subjectively.
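The "sum of convolutions" projector the abstract describes can be sketched as follows (an idealized 2-D toy, not the paper's system model): each image row lies at a different distance from the collimator, so it is blurred by its own distance-dependent point response function before being accumulated into the 1-D projection. The Gaussian PSF and its width law are illustrative assumptions.

```python
import numpy as np

def distance_psf(distance, half=6):
    """Depth-dependent collimator response: a normalized Gaussian whose
    width grows with object-to-collimator distance (assumed model)."""
    sigma = 0.5 + 0.3 * distance
    x = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def project(obj):
    """Forward projector for one view: blur each row with its own PSF,
    then sum rows -- a 'sum of convolutions' instead of an explicit
    system matrix, which is what saves time and memory."""
    n_rows, n_cols = obj.shape
    proj = np.zeros(n_cols)
    for d in range(n_rows):
        proj += np.convolve(obj[d], distance_psf(d), mode="same")
    return proj

obj = np.zeros((8, 32))
obj[3, 16] = 1.0               # a point source at depth 3
p = project(obj)
```

The matching backprojector is the adjoint: correlate the 1-D data with the same PSFs and spread the result back over the rows, so both operators need only the one analytic PSF family.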


Primary Solution Evaluations for Interpreting Electromagnetic Data (전자탐사 자료 해석을 위한 1차장 계산)

  • Kim, Hee-Joon;Choi, Ji-Hyang;Han, Nu-Ree;Song, Yoon-Ho;Lee, Ki-Ha
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.4
    • /
    • pp.361-366
    • /
    • 2009
  • Layered-earth Green's functions in electromagnetic (EM) surveys play a key role in modeling the response of exploration targets. They are computed through Hankel transforms of analytic kernels, and computational precision depends upon the choice among algebraically equivalent forms by which these kernels are expressed. Since three-dimensional (3D) modeling can require a huge number of Green's function evaluations, total computational time can be dominated by the Hankel transform evaluations. Linear digital filters have proven to be a fast and accurate method of computing these Hankel transforms. In EM modeling for 3D inversion, electric fields are generally evaluated with the secondary-field formulation to avoid the singularity problem. In this study, the three components of the electric field for five different sources on the surface of a homogeneous half-space were derived as primary-field solutions. Moreover, reflection coefficients in TE and TM modes were produced to calculate EM responses accurately for a two-layered model having a sea layer. Accurate primary fields should substantially improve accuracy and decrease computation times for Green's-function-based problems such as MT problems and marine EM surveys.
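The Hankel transforms mentioned here have the form H(r) = ∫₀^∞ K(λ) J₀(λr) λ dλ. A brute-force quadrature sketch makes the computational burden concrete; it is exactly this slow evaluation that the linear digital filters in the abstract replace with a short weighted sum of kernel samples. The kernel, limits, and sample counts below are illustrative choices, checked against the known analytic pair K(λ) = e^{-aλ} ↔ a/(a²+r²)^{3/2}.

```python
import numpy as np

def trap(y, x):
    """Trapezoid rule along the last axis (kept explicit for clarity)."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def j0(x, n=801):
    """Bessel J0 via its integral form J0(x) = (1/pi) int_0^pi cos(x sin t) dt."""
    t = np.linspace(0.0, np.pi, n)
    return trap(np.cos(np.atleast_1d(x)[:, None] * np.sin(t)), t) / np.pi

def hankel0(kernel, r, lam_max=40.0, n=3000):
    """Order-0 Hankel transform by naive quadrature -- the baseline a
    digital filter replaces with ~100 precomputed filter weights."""
    lam = np.linspace(0.0, lam_max, n)
    return trap(kernel(lam) * j0(lam * r) * lam, lam)

a, r = 1.0, 0.5
numeric = hankel0(lambda lam: np.exp(-a * lam), r)
exact = a / (a * a + r * r) ** 1.5   # analytic transform of exp(-a*lam)
```

A digital-filter evaluation collapses the inner integral to one dot product per r, which is why it dominates when 3D modeling needs millions of Green's function values.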

Implementation of GIS-based Application Program for Circuity and Accessibility Analysis in Road Network Graph (도로망 그래프의 우회도와 접근도 분석을 위한 GIS 응용 프로그램 개발)

  • Lee, Kiwon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.7 no.1
    • /
    • pp.84-93
    • /
    • 2004
  • Recently, domain-specific demands for practical applications and analysis schemes using spatial thematic information are increasing. Accordingly, in this study, a GIS-based application program is implemented to perform spatial analysis in transportation geography on base road layer data. Using this program, quantitative estimation of circuity and accessibility, extracted from the nodes of a graph-typed network structure, is possible within an arbitrary analysis zone or administrative boundary zone. Circuity represents the extent of difference between actual node-to-node paths and fully connected nodes in the analysis zone. Accessibility, in turn, indicates the extent of accessibility or connectivity between all nodes contained in the analysis zone, judging from the interconnection status of the whole node set. The input data of this program, implemented as an AVX executable extension using Avenue of ArcView, is not transportation database information based on a transportation data model but layer data obtained directly from digital map sets. Computation of circuity and accessibility can thus serve as a spatial analysis function for GIS applications in the transportation field.
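The two measures can be sketched on a toy road graph with standard transport-geography proxies (the paper's exact formulas may differ): circuity as the ratio of total network shortest-path distance to total straight-line distance, and accessibility as each node's sum of shortest-path distances to all other nodes.

```python
import numpy as np

def all_pairs_shortest(dist):
    """Floyd-Warshall all-pairs shortest paths on a weighted adjacency
    matrix (np.inf where there is no direct road link)."""
    d = dist.copy()
    for k in range(len(d)):
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

# Toy network: four nodes on a unit square, linked only around the ring.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
inf = np.inf
road = np.array([[0,   1,   inf, 1],
                 [1,   0,   1,   inf],
                 [inf, 1,   0,   1],
                 [1,   inf, 1,   0]], float)

net = all_pairs_shortest(road)
euc = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

mask = ~np.eye(4, dtype=bool)                 # ignore self-distances
circuity = net[mask].sum() / euc[mask].sum()  # >= 1: how indirect the roads are
accessibility = net.sum(axis=1)               # lower = better-connected node
```

Here the diagonal trips (e.g. node 0 to node 2) must detour around the ring, so circuity exceeds 1; on a fully connected network it would equal 1.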


Verifiable Cloud-Based Personal Health Record with Recovery Functionality Using Zero-Knowledge Proof (영지식 증명을 활용한 복원 기능을 가진 검증 가능한 클라우드 기반의 개인 건강기록)

  • Kim, Hunki;Kim, Jonghyun;Lee, Dong Hoon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.6
    • /
    • pp.999-1012
    • /
    • 2020
  • As the use of personal health records has increased in recent years, research on cryptographic protocols for protecting the personal information in personal health records has been actively conducted. Currently, personal health records are commonly encrypted and outsourced to the cloud. However, this method is limited in verifying the integrity of personal health records, and data availability is poor because decryption is required for every use. To solve this problem, this paper proposes a verifiable cloud-based personal health record management scheme using a redactable signature scheme and zero-knowledge proofs. The redactable signature scheme preserves privacy by deleting sensitive information while still allowing the integrity of the original document to be verified, and the zero-knowledge proof verifies that nothing in the redacted document has been deleted or modified except for the redacted parts. In addition, the scheme increases data availability over existing management schemes by allowing deleted parts to be recovered, only when necessary, through a Redact Recovery Authority. We propose a verifiable cloud-based personal health record management model using the proposed scheme and analyze its efficiency through an implementation.
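The core idea of a redactable signature can be sketched with hash commitments (a simplified stand-in, not the paper's scheme): each field is committed with a salted hash, the list of commitments is signed once, and a redaction replaces a field with nothing while its commitment keeps the signature verifiable. HMAC stands in for a real public-key signature, and a real scheme would add the zero-knowledge proof and the Redact Recovery Authority on top.

```python
import hashlib
import hmac
import os

SECRET = b"signer-demo-key"   # placeholder for a real signing key

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def sign(fields):
    """Commit to each field with a salted hash, then MAC the commitment
    list.  Redacting a field later does not disturb the commitments."""
    salts = [os.urandom(16) for _ in fields]
    commits = [h(s + f.encode()) for s, f in zip(salts, fields)]
    sig = hmac.new(SECRET, b"".join(commits), hashlib.sha256).digest()
    return salts, commits, sig

def redact(fields, salts, idx):
    """Hide field idx (and its salt) while keeping the record verifiable."""
    fields, salts = list(fields), list(salts)
    fields[idx] = None
    salts[idx] = None
    return fields, salts

def verify(fields, salts, commits, sig):
    """Check every revealed field against its commitment, then the MAC."""
    for f, s, c in zip(fields, salts, commits):
        if f is not None and h(s + f.encode()) != c:
            return False   # a revealed field was tampered with
    expect = hmac.new(SECRET, b"".join(commits), hashlib.sha256).digest()
    return hmac.compare_digest(sig, expect)

record = ["name: Kim", "diagnosis: flu", "doctor: Lee"]
salts, commits, sig = sign(record)
redacted, red_salts = redact(record, salts, 1)   # hide the sensitive diagnosis
```

Note that because HMAC is symmetric, verification here needs the signer's secret; a deployed scheme would use a public-key signature so anyone can verify the redacted record.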

Assessment of the Inundation Area and Volume of Tonle Sap Lake using Remote Sensing and GIS (원격탐사와 GIS를 이용한 Tonle Sap호의 홍수량 평가)

  • Chae, Hyosok
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.8 no.3
    • /
    • pp.96-106
    • /
    • 2005
  • Remote sensing and GIS techniques, which provide valuable information in the time and space domains, are known to be very useful for providing permanent records by mapping and monitoring flooded areas. In 2000, between July and October, Tonle Sap Lake in the Mekong River Basin experienced some of the worst flood devastation on record. In this study, Landsat ETM+ and RADARSAT imagery were used to obtain the basic information for computing the inundation area and volume, using an ISODATA classifier and a segmentation technique. However, the extracted inundation area covered only a fraction of the actually inundated area because of clouds in the imagery and complex ground conditions. To overcome these limitations, the GIS cost-distance method was used to estimate the inundated area at the peak level by integrating the inundated area from satellite imagery with a digital elevation model (DEM). The estimated inundation area was then converted to an inundation volume using GIS. This volume was compared with the volume based on hydraulic modeling with MIKE 11, one of the most popular dynamic river modeling systems. The method is suitable for estimating inundation volume even when the Landsat ETM+ imagery contains many clouds.
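The final area-to-volume conversion over a DEM reduces to simple raster arithmetic, sketched below (an illustrative toy, ignoring the cost-distance connectivity step that restricts flooding to cells reachable from the lake): flood depth is the water-surface level minus cell elevation where positive, and volume is the depth sum times cell area.

```python
import numpy as np

def inundation(dem, water_level, cell_area=1.0):
    """Flood depth and volume from a DEM at a given water-surface level:
    depth = max(0, level - elevation), volume = sum(depth) * cell area."""
    depth = np.clip(water_level - dem, 0.0, None)
    return depth, depth.sum() * cell_area

# Toy 2x3 DEM in metres; 100 m^2 cells; peak water level at 2.0 m.
dem = np.array([[2.0, 1.0, 3.0],
                [1.5, 0.5, 2.5]])
depth, volume = inundation(dem, water_level=2.0, cell_area=100.0)
```

Cells at or above the water level contribute zero depth, so only the genuinely submerged raster cells add to the volume estimate.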


Sensitivity Analysis of Runoff-Quality Parameters in the Urban Basin (도시 배수유역의 유출-수질 특성인자의 민감도 분석)

  • Lee, Jong-Tae;Gang, Tae-Ho
    • Journal of Korea Water Resources Association
    • /
    • v.30 no.1
    • /
    • pp.83-93
    • /
    • 1997
  • The purpose of this study is to analyze the sensitivity of the parameters that affect runoff and water quality in the studied drainage basins. The SWMM model is applied to four drainage basins, located at Namgazwa and Sanbon in Seoul and at Gray Haven and Kings Creek in the USA. First, the optimum parameter values, which give the least simulation error against the observed data, are found by an iterative procedure; these are used as the standard values against which the varied parameter values are compared. To assess the effect of each parameter on the computed result, the parameters are changed step by step, and the results are compared to the standard results for flow rate and sewer water quality. The study indicates that discharge is greatly affected by the type of runoff surface: the impervious area remarkably affects the peak flow and runoff volume, while surface storage affects the runoff volume in mildly sloped basins. In addition, the major parameters affecting pollutant concentrations and loadings are the contaminant accumulation coefficient per unit area per unit time and the number of continuous dry-weather days. Furthermore, the factors that affect water quality during the initial rainfall period are the rainfall intensity, the transport capacity coefficient, and its power coefficient. Consequently, to simulate runoff and water quality, previous data from research on the studied basins need to be evaluated, pollutant loads from the tributary areas should be accurately estimated, and rational computation methods for pollutant calculation should be introduced.
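The "change the parameters step by step" procedure is a one-at-a-time sensitivity analysis, sketched here with a deliberately toy runoff formula standing in for SWMM (the function, its parameters, and the perturbation size are all illustrative assumptions): perturb one parameter around its calibrated standard value and record the relative change in the output.

```python
def toy_runoff_peak(imperv_frac, storage_mm, rain_mm=30.0):
    """Stand-in runoff model (NOT SWMM): all rain runs off the impervious
    fraction; the pervious fraction loses depression storage first."""
    pervious = max(rain_mm - storage_mm, 0.0) * (1.0 - imperv_frac)
    return rain_mm * imperv_frac + pervious

def sensitivity(base, param, step=0.1):
    """One-at-a-time sensitivity: perturb `param` by +/-10% (relative)
    around the calibrated standard values and report the relative
    change in the simulated peak."""
    reference = toy_runoff_peak(**base)
    out = {}
    for sign in (+1, -1):
        perturbed = dict(base)
        perturbed[param] = base[param] * (1.0 + sign * step)
        out[sign] = (toy_runoff_peak(**perturbed) - reference) / reference
    return out

base = {"imperv_frac": 0.4, "storage_mm": 5.0}   # calibrated "standard" values
s_imperv = sensitivity(base, "imperv_frac")
s_storage = sensitivity(base, "storage_mm")
```

As in the study's findings, more impervious area raises the peak while more surface storage lowers runoff; ranking parameters by the magnitude of these relative changes identifies the ones worth calibrating carefully.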


Prestack Depth Migration for Gas Hydrate Seismic Data of the East Sea (동해 가스 하이드레이트 탄성파자료의 중합전 심도 구조보정)

  • Jang, Seong-Hyung;Suh, Sang-Yong;Go, Gin-Seok
    • Economic and Environmental Geology
    • /
    • v.39 no.6 s.181
    • /
    • pp.711-717
    • /
    • 2006
  • In order to study gas hydrate, a potential future energy resource, the Korea Institute of Geoscience and Mineral Resources has conducted seismic reflection surveys in the East Sea since 1997. One piece of evidence for the presence of gas hydrate in seismic reflection data is a bottom simulating reflector (BSR). The BSR occurs at the interface between overlying higher-velocity, hydrate-bearing sediment and underlying lower-velocity, free-gas-bearing sediment; it is often characterized by a large reflection coefficient and a reflection polarity opposite to that of the seafloor reflection. To apply depth migration to seismic reflection data, we need high-performance computers and a parallelization technique because of the huge data volume and computation involved. Phase shift plus interpolation (PSPI) is a useful migration method owing to its low computing time and computational efficiency, and it is intrinsically parallelizable in the frequency domain. We conducted conventional data processing on the gas hydrate data of the East Sea and then applied prestack depth migration using message-passing-interface PSPI (MPI_PSPI), parallelized with MPI local-area multicomputer (MPI_LAM). The velocity model was built from the stacking velocities after picking horizons on the stack image with the in-house processing tool Geobit. On the migrated stack section, BSRs were found at about SP 3555-4162 and at a two-way travel time of around 2,950 ms in the time domain; in the depth domain these BSRs appear at 6-17 km distance and 2.1 km depth below the seafloor. Since subsurface zones where energy is concentrated were well imaged, acquisition parameters should be chosen so that seismic energy is transmitted to the target area.
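The phase-shift core of PSPI can be sketched in numpy (one idealized downward-continuation step, not the paper's MPI code): in the frequency-wavenumber domain the wavefield is extrapolated by multiplying with exp(i·kz·dz), where kz = sqrt((ω/v)² − kx²); evanescent components (negative radicand) are damped instead. Because each frequency slice is independent, the loop over frequencies parallelizes trivially, which is what makes PSPI attractive for MPI.

```python
import numpy as np

def phase_shift_step(wavefield_fk, freqs, kx, velocity, dz):
    """One downward-continuation step of phase-shift migration in the
    (f, kx) domain.  PSPI repeats this for several reference velocities
    per depth step and interpolates between the results."""
    w = 2.0 * np.pi * freqs[:, None]
    kz2 = (w / velocity) ** 2 - kx[None, :] ** 2
    kz = np.sqrt(np.abs(kz2))
    phase = np.where(kz2 >= 0.0,
                     np.exp(1j * kz * dz),   # propagating: pure phase shift
                     np.exp(-kz * dz))       # evanescent: exponential decay
    return wavefield_fk * phase

freqs = np.array([10.0, 20.0])          # Hz
kx = np.array([0.0, 0.01])              # rad/m; all propagating at v = 2000 m/s
field = np.ones((2, 2), dtype=complex)
out = phase_shift_step(field, freqs, kx, velocity=2000.0, dz=10.0)
```

For propagating components the operator only rotates phase, so amplitudes are preserved across depth steps; only evanescent energy is attenuated.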

Simplification of State Invariant with Mixed Reachability Analysis (혼합 도달성 분석을 이용한 상태 불변식의 단순화)

Kwon, Gihwan
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.212-218
    • /
    • 2003
  • A state invariant is a property that holds in every reachable state. It can be used not only for understanding and analyzing complex software systems, but also for system verification tasks such as checking safety, liveness, and consistency. For these reasons, there has been much research on deriving state invariants from finite state machine models. In previous work, every reachable state had to be considered when generating a state invariant, so the result is often too complex for the user to understand. This paper addresses the question of how to simplify state invariants. Since the complexity of a state invariant depends strongly on the size of the state set considered, the smaller the set of states, the shorter the state invariant. To this end, we let the user focus on a scope of interest rather than the whole state space of a model. Computation Tree Logic (CTL) is used to specify the scope of interest. Given a scope in CTL, mixed reachability analysis is used to find the set of states inside it. Obviously, a set of states calculated in this way is a subset of all reachable states; therefore, we obtain a weaker, but comprehensible, state invariant.
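The scope-restricted reachability idea can be sketched on a toy finite state machine (a crude stand-in for the paper's CTL-driven mixed reachability analysis): an ordinary search explores every reachable state, while passing a state predicate, here playing the role of the CTL-selected scope, restricts exploration to a subset, over which any derived invariant can only be simpler.

```python
def reachable(init, trans, scope=None):
    """Depth-first search over a finite state machine.  If `scope` is
    given, only states satisfying it are explored -- a stand-in for
    restricting analysis to the user's CTL-specified scope."""
    seen, stack = set(), [s for s in init if scope is None or scope(s)]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        for t in trans.get(s, ()):
            if scope is None or scope(t):
                stack.append(t)
    return seen

# Toy machine: states are (mode, counter) pairs.
trans = {("idle", 0): [("run", 1)],
         ("run", 1): [("run", 2), ("idle", 0)],
         ("run", 2): [("idle", 0)]}

full = reachable([("idle", 0)], trans)
scoped = reachable([("idle", 0)], trans, scope=lambda s: s[1] <= 1)
```

Over `full` the strongest counter invariant is `counter <= 2`; over the scoped subset it tightens to `counter <= 1`, illustrating how a smaller considered state set yields a shorter, more comprehensible invariant.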