  • Title/Summary/Keyword: Processing Accuracy

Search Results: 3,857

Parameter-Efficient Neural Networks Using Template Reuse (템플릿 재사용을 통한 패러미터 효율적 신경망 네트워크)

  • Kim, Daeyeon;Kang, Woochul
    • KIPS Transactions on Software and Data Engineering / v.9 no.5 / pp.169-176 / 2020
  • Recently, deep neural networks (DNNs) have brought revolutions to many mobile and embedded devices by providing human-level machine intelligence for various applications. However, the high inference accuracy of such DNNs comes at high computational cost, and hence there have been significant efforts to reduce the computational overhead of DNNs, either by compressing off-the-shelf models or by designing new small-footprint DNN architectures tailored to resource-constrained devices. One notable recent paradigm in designing small-footprint DNN models is sharing parameters across several layers. However, in previous approaches, parameter-sharing techniques have been applied to large deep networks, such as ResNet, that are known to have high redundancy. In this paper, we propose a parameter-sharing method for already parameter-efficient small networks such as ShuffleNetV2. In our approach, small templates are combined with small layer-specific parameters to generate weights. Our experimental results on the ImageNet and CIFAR100 datasets show that our approach can reduce the parameter size of ShuffleNetV2 by 15%-35% while achieving smaller accuracy drops than previous parameter-sharing and pruning approaches. We further show that the proposed approach is efficient in terms of latency and energy consumption on modern embedded devices.
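To make the template-reuse idea above concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: a convolution layer builds its weight tensor as a per-layer linear combination of a shared template bank, so only the few mixing coefficients are layer-specific. The class names (`TemplateBank`, `TemplateConv2d`) and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemplateBank(nn.Module):
    """Shared bank of weight templates reused across several conv layers (assumed design)."""
    def __init__(self, num_templates, out_ch, in_ch, k):
        super().__init__()
        self.templates = nn.Parameter(torch.randn(num_templates, out_ch, in_ch, k, k) * 0.01)

class TemplateConv2d(nn.Module):
    """Conv layer whose weights are a per-layer mixture of the shared templates."""
    def __init__(self, bank):
        super().__init__()
        self.bank = bank
        # Only these few mixing coefficients are layer-specific parameters.
        self.coeffs = nn.Parameter(torch.randn(bank.templates.shape[0]))

    def forward(self, x):
        # weight[o, i] = sum_t coeffs[t] * templates[t, o, i]
        w = (self.coeffs.view(-1, 1, 1, 1, 1) * self.bank.templates).sum(dim=0)
        return F.conv2d(x, w, padding=w.shape[-1] // 2)

bank = TemplateBank(num_templates=4, out_ch=32, in_ch=32, k=3)
layers = nn.Sequential(*[TemplateConv2d(bank) for _ in range(5)])  # five layers share one bank
y = layers(torch.randn(1, 32, 56, 56))
```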

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.444-450 / 2015
  • The accuracy of small, low-cost CCD cameras is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a tracked target object by using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in both known and unknown environments, and their localization should occur naturally. Estimating the attitude of a UAV through environment recognition is one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information of a marker on the floor. The method combines the position observed from GPS sensors and the attitude estimated from the images captured by a fixed camera to estimate the UAV's state. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the floor marker to the estimated UAV attitude. Since the equations are based on the estimated position, measurement error may exist at all times. The proposed method utilizes the error between the observed and estimated image coordinates to localize the UAV, and a Kalman filter is applied for this purpose. Its performance is verified by the image processing results and the experiment.
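The Kalman-filter fusion step described in the abstract can be illustrated with a generic linear predict/update cycle. The sketch below is a simplification under stated assumptions (the paper's state, projection, and noise models are not specified here); all matrices and dimensions are placeholders.

```python
import numpy as np

# Minimal linear Kalman filter sketch (illustrative, not the paper's exact model).
# State x: e.g., [roll, pitch, yaw] estimated from the floor-marker image.
# Measurement z: marker coordinates observed in the image frame.
def kalman_update(x, P, z, F, Q, H, R):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation: difference between observed and predicted image coordinates
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Correct
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example with a 3-state attitude and a 2D image measurement (all values assumed).
x, P = np.zeros(3), np.eye(3)
F, Q = np.eye(3), 0.01 * np.eye(3)
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # hypothetical linearized projection
R = 0.1 * np.eye(2)
x, P = kalman_update(x, P, np.array([0.02, -0.01]), F, Q, H, R)
```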

Development of a Daily Pattern Clustering Algorithm using Historical Profiles (과거이력자료를 활용한 요일별 패턴분류 알고리즘 개발)

  • Cho, Jun-Han;Kim, Bo-Sung;Kim, Seong-Ho;Kang, Weon-Eui
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.4 / pp.11-23 / 2011
  • The objective of this paper is to develop a daily pattern clustering algorithm using historical traffic data that can reliably classify traffic patterns under various flow conditions on urban streets. The developed algorithm is divided into two major parts, namely a macroscopic and a microscopic analysis. First, the macroscopic analysis derives daily peak/non-peak hours and key analysis time zones based on the speed time series. The microscopic analysis then clusters daily patterns by comparing the similarity between individuals, or between an individual and a group. The algorithm used in the microscopic analysis is called the "Two-step Speed Clustering (TSC)" algorithm. The TSC algorithm improves the accuracy of daily pattern clustering based on time-series speed variation data. The algorithm was tested with point detector data collected in Ansan city and verified through comparison with clustering techniques in SPSS. Our efforts in this study are expected to contribute to developing pattern-based information processing, operations management of daily recurrent congestion, and improvement of daily signal optimization based on TOD plans.
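Since the abstract does not give the TSC algorithm's exact formulation, the following sketch only illustrates the general two-stage idea: a coarse macroscopic split of days followed by a finer clustering of full speed profiles. The peak-hour window, the synthetic data, and the use of k-means are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative two-stage clustering of daily speed profiles (not the paper's exact TSC).
# Each row of `profiles` is one day's speed time series (e.g., 288 five-minute bins).
rng = np.random.default_rng(0)
profiles = rng.normal(50, 8, size=(60, 288))   # synthetic speeds, km/h

# Stage 1 (macroscopic): split days into coarse groups by mean peak-hour speed.
peak = profiles[:, 84:108].mean(axis=1)        # ~07:00-09:00 assumed as the peak window
coarse = (peak < np.median(peak)).astype(int)  # 0 = faster days, 1 = slower days

# Stage 2 (microscopic): cluster each coarse group by full-profile similarity.
labels = np.empty(len(profiles), dtype=int)
for g in (0, 1):
    idx = np.where(coarse == g)[0]
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles[idx])
    labels[idx] = g * 2 + km.labels_           # unique label per (group, cluster)
```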

An Algorithm for Segmenting the License Plate Region of a Vehicle Using a Color Model (차량번호판 색상모델에 의한 번호판 영역분할 알고리즘)

  • Jun Young-Min;Cha Jeong-Hee
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.2 s.308 / pp.21-32 / 2006
  • A license plate recognition (LPR) unit consists of the following core components: plate region segmentation, individual character extraction, and character recognition. Of these three components, the accuracy of plate region segmentation determines the overall recognition rate of the LPR unit. This paper proposes an algorithm for segmenting the license plate region on the front or rear of a vehicle in a fast and accurate manner. For the proposed algorithm, images are captured on the spot where unmanned monitoring of illegal parking and stopping is performed, with a variety of roadway environments taken into account. To enhance segmentation performance on these on-the-spot images of license plate regions, the proposed algorithm uses a mathematical model of license plate colors to convert color images into digital data. In addition, the algorithm uses Gaussian smoothing and double thresholding to eliminate image noise, one-pass boundary tracing for region labeling, and MBRs to determine license plate region candidates and extract individual characters from them, thereby segmenting the plate region through a verification process. This study addresses the inability of conventional techniques to segment the license plate region when the frame of the plate is damaged, while processing images in real time, thereby allowing practical application of the proposed algorithm.
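A rough illustration of a color-model-plus-labeling pipeline of the kind described above is sketched below with OpenCV. The synthetic test image, the HSV bounds standing in for the plate color model, and the aspect-ratio filter are all assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

# Illustrative color-model-based plate-candidate extraction (not the paper's exact
# pipeline). A green rectangle on a synthetic image stands in for a captured plate;
# the HSV bounds below are a placeholder color model (e.g., green commercial plates).
img = np.full((240, 320, 3), 90, dtype=np.uint8)                 # gray background
cv2.rectangle(img, (100, 120), (220, 160), (80, 200, 80), -1)    # plate-like region (BGR)

blur = cv2.GaussianBlur(img, (5, 5), 0)                          # Gaussian smoothing
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
lower, upper = np.array([40, 60, 60]), np.array([80, 255, 255])  # assumed color bounds
mask = cv2.inRange(hsv, lower, upper)                            # threshold on the color model

# Region labeling, then minimum bounding rectangles (MBRs) as plate candidates.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
candidates = [tuple(stats[i][:4]) for i in range(1, n)           # label 0 is background
              if stats[i][4] > 500 and 2.0 < stats[i][2] / float(stats[i][3]) < 6.0]
print(candidates)
```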

Computationally Efficient Video Object Segmentation using SOM-Based Hierarchical Clustering (SOM 기반의 계층적 군집 방법을 이용한 계산 효율적 비디오 객체 분할)

  • Jung Chan-Ho;Kim Gyeong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.74-86 / 2006
  • This paper proposes a robust and computationally efficient algorithm for automatic video object segmentation. To implement spatio-temporal segmentation, which aims at efficiently combining motion segmentation and color segmentation, an SOM-based hierarchical clustering method is employed in which the segmentation process is regarded as clustering of feature vectors. As a result, the high computational complexity required to obtain exact segmentation results in conventional video object segmentation methods, as well as the performance degradation due to noise, are significantly reduced. A measure of motion vector reliability based on an MRF-based MAP estimation scheme is introduced to minimize the influence of motion estimation errors. In addition, a noise elimination scheme based on the motion reliability histogram and a clustering validity index for automatically identifying the number of objects in the scene are applied. A cross projection method for effective object tracking and a dynamic memory for maintaining temporal coherency are introduced as well. A set of experiments conducted over several video sequences demonstrates the efficiency of the proposed algorithm in terms of computational complexity, robustness to noise, and segmentation accuracy.
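As a rough companion to the abstract, the sketch below implements a plain self-organizing map over feature vectors, the clustering primitive the method builds on; the hierarchical stage, motion-reliability weighting, and tracking components are omitted, and all hyperparameters are assumed.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch for clustering feature vectors
# (e.g., per-pixel motion/color features); illustrative, not the paper's implementation.
def train_som(features, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = features.shape[1]
    weights = rng.normal(size=(h, w, dim))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n = len(features)
    for t in range(epochs * n):
        x = features[rng.integers(n)]
        # Best matching unit (BMU): the node closest to the sample.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Decaying learning rate and neighborhood radius.
        frac = t / float(epochs * n)
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        nb = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * nb[..., None] * (x - weights)
    return weights.reshape(-1, dim)   # each row is one cluster prototype

protos = train_som(np.random.default_rng(1).normal(size=(500, 5)))
```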

Sleep/Wake Dynamic Classifier based on Wearable Accelerometer Device Measurement (웨어러블 가속도 기기 측정에 의한 수면/비수면 동적 분류)

  • Park, Jaihyun;Kim, Daehun;Ku, Bonhwa;Ko, Hanseok
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.6 / pp.126-134 / 2015
  • Sleep disorders are being recognized as one of the major health issues related to high levels of stress, and interest in sleep quality is rapidly increasing. However, diagnosing a sleep disorder is not a simple task because patients must undergo a polysomnography test, which requires a long time and high cost. To address this problem, a wrist-worn device with an embedded accelerometer is being considered as a simple and low-cost alternative. Conventional methods, however, label a user's state as "sleep" or "wake" according to whether the accelerometer values in each individual segment exceed a certain threshold. As a result, a high misclassification rate is observed due to the user's intermittent movements while sleeping and tiny movements while awake. In this paper, we propose a novel method that resolves these problems by employing a dynamic classifier that evaluates the similarity between neighboring scores obtained from an SVM classifier. The performance of the proposed method is evaluated on 50 data sets, and its superiority is verified by achieving 88.9% accuracy, 88.9% sensitivity, and 88.5% specificity.
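The "dynamic classifier" idea, scoring each epoch with an SVM and then letting neighboring scores vote, can be sketched as below. The features, labels, window length, and smoothing-by-averaging step are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative sketch: per-epoch SVM scores smoothed over neighboring windows
# before thresholding, to suppress isolated movements (not the paper's exact classifier).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))            # assumed per-epoch accelerometer features
y_train = rng.integers(0, 2, size=200)         # 0 = wake, 1 = sleep (synthetic labels)
X_test = rng.normal(size=(120, 8))

svm = SVC(kernel="rbf").fit(X_train, y_train)
scores = svm.decision_function(X_test)         # signed distance to the decision boundary

# "Dynamic" step: average each score with its neighbors so brief movements during
# sleep (or stillness while awake) do not flip the decision on their own.
win = 5
smoothed = np.convolve(scores, np.ones(win) / win, mode="same")
pred = (smoothed > 0).astype(int)              # 1 = sleep, 0 = wake
```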

Analysis Method for Full-length LiDAR Waveforms (라이다 파장 분석 방법론에 대한 연구)

  • Jung, Myung-Hee;Yun, Eui-Jung;Kim, Cheon-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.4 s.316 / pp.28-35 / 2007
  • Airborne laser altimeters have been utilized for 3D topographic mapping of the Earth, Moon, and planets with high resolution and accuracy; this rapidly growing remote sensing technique measures the round-trip time of an emitted laser pulse to determine topography. The traveling time from the laser scanner to the Earth's surface and back is directly related to the distance from the sensor to the ground. When there are several objects within the travel path of the laser pulse, the reflected laser pulses are distorted by surface variation within the footprint, generating multiple echoes because each target transforms the emitted pulse. The shapes of the received waveforms also contain important information about surface roughness, slope, and reflectivity. Waveform processing algorithms parameterize and model the return signal resulting from the interaction of the transmitted laser pulse with the surface, so that each of the multiple targets within the footprint can be identified. Assuming each response is Gaussian, returns are modeled as a Gaussian mixture distribution, and the parameters of the model are estimated by the LMS method or the EM algorithm. However, each response actually shows skewness to the right with a slowly decaying tail. For applications that require more accurate analysis, the tail information must be quantified by an approach that decomposes the tail. One method to handle this problem is proposed in this study.
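A minimal version of the Gaussian-mixture waveform decomposition mentioned above is sketched below using least-squares fitting; the skewed, slowly decaying tail that the paper targets is deliberately not modeled, and the synthetic waveform and initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative decomposition of a full-waveform LiDAR return as a sum of Gaussians.
def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-(t - m) ** 2 / (2 * s ** 2))
    return g(a1, m1, s1) + g(a2, m2, s2)

t = np.linspace(0, 100, 400)                           # sample times (ns), synthetic
true = two_gaussians(t, 1.0, 40, 3.0, 0.5, 60, 4.0)
wave = true + np.random.default_rng(0).normal(0, 0.02, t.size)   # noisy echo waveform

p0 = [0.8, 35, 2.0, 0.4, 65, 3.0]                      # rough initial guesses
params, _ = curve_fit(two_gaussians, t, wave, p0=p0)
# `params` holds the amplitude, center, and width of each echo within the footprint.
```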

Data processing system and spatial-temporal reproducibility assessment of GloSea5 model (GloSea5 모델의 자료처리 시스템 구축 및 시·공간적 재현성평가)

  • Moon, Soojin;Han, Soohee;Choi, Kwangsoon;Song, Junghyun
    • Journal of Korea Water Resources Association / v.49 no.9 / pp.761-771 / 2016
  • The GloSea5 (Global Seasonal forecasting system version 5) is provided and operated by the KMA (Korea Meteorological Administration). GloSea5 provides Forecast (FCST) and Hindcast (HCST) data, and its horizontal resolution is about 60 km (0.83°×0.56°) in the mid-latitudes. In order to use these data in watershed-scale water management, GloSea5 needs spatial-temporal downscaling; statistical downscaling was therefore used to correct systematic biases in the variables and to improve data reliability. The HCST data are provided in ensemble format, and the highest statistical correlation of ensemble precipitation (R²=0.60, RMSE=88.92, NSE=0.57) was obtained for the Yongdam Dam watershed on grid #6. Additionally, the original GloSea5 (600.1 mm) showed the greatest difference (-26.5%) from observations (816.1 mm) during the summer flood season, whereas the downscaled GloSea5 showed only a -3.1% error. Most of the underestimation occurred in flood-season precipitation, and the downscaled GloSea5 largely restored these precipitation levels. According to the spatial autocorrelation analysis using seasonal Moran's I, the spatial distribution was statistically significant. These results reduce the uncertainty of the original GloSea5 and substantiate its spatial-temporal accuracy and validity. This spatial-temporal reproducibility assessment will serve as important basic data for watershed-scale water management.
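As a hedged illustration of the statistical downscaling step, the sketch below applies a simple monthly scaling-factor bias correction to synthetic precipitation; the paper's actual correction scheme, grid handling, and ensemble treatment are not reproduced here.

```python
import numpy as np

# Illustrative monthly scaling-factor bias correction of precipitation
# (a common statistical downscaling step; not the paper's exact procedure).
def monthly_scaling_correction(hcst, obs, months):
    """hcst, obs: daily precipitation arrays; months: month index (1-12) per day."""
    corrected = hcst.astype(float).copy()
    for m in range(1, 13):
        sel = months == m
        ratio = obs[sel].sum() / max(hcst[sel].sum(), 1e-9)   # per-month scaling factor
        corrected[sel] *= ratio
    return corrected

rng = np.random.default_rng(0)
months = np.repeat(np.arange(1, 13), 30)                      # synthetic 360-day year
hcst = rng.gamma(2.0, 2.0, months.size)                       # raw model precipitation
obs = rng.gamma(2.0, 2.6, months.size)                        # observed precipitation
corrected = monthly_scaling_correction(hcst, obs, months)
```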

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE / v.9 no.1 s.16 / pp.7-18 / 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately and build a map of the environment simultaneously. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance, which are used in the map building and localization processes. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling region from the wall region. During initial map building, features are calculated for the segmented regions and stored in the map database. Features are continuously calculated from sequential input images and matched against the existing map until the map building process is finished; features that are not matched are added to the map. Localization is performed simultaneously with feature matching during map building: when features are matched with the existing map, the robot is localized and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes. The positioning accuracy is ±13 cm, and the average error of the robot's heading angle is ±3 degrees.
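A minimal sketch of the scale-invariant feature matching at the core of the method is given below using OpenCV's SIFT; the fisheye calibration, ceiling/wall segmentation, and map database logic are omitted, and the synthetic images are stand-ins.

```python
import cv2
import numpy as np

# Illustrative scale-invariant feature matching between a current ceiling frame and a
# stored map frame (SIFT via OpenCV; not the paper's full mapping pipeline).
rng = np.random.default_rng(0)
map_img = (rng.random((480, 640)) * 255).astype(np.uint8)   # stand-in for a stored map view
cur_img = cv2.GaussianBlur(map_img, (3, 3), 0)              # stand-in for the current frame

sift = cv2.SIFT_create()
kp_map, des_map = sift.detectAndCompute(map_img, None)
kp_cur, des_cur = sift.detectAndCompute(cur_img, None)

# Ratio-test matching: keep matches whose best distance is clearly below the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_cur, des_map, k=2)
        if m.distance < 0.75 * n.distance]

# Matched features localize the robot against the existing map; unmatched ones would
# be added to the map database as new landmarks.
print(f"{len(good)} feature matches against the existing map")
```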


A Spatial Entropy based Decision Tree Method Considering Distribution of Spatial Data (공간 데이터의 분포를 고려한 공간 엔트로피 기반의 의사결정 트리 기법)

  • Jang, Youn-Kyung;You, Byeong-Seob;Lee, Dong-Wook;Cho, Sook-Kyung;Bae, Hae-Young
    • The KIPS Transactions: Part B / v.13B no.7 s.110 / pp.643-652 / 2006
  • Decision trees are mainly used for classification and prediction in data mining. The distribution of spatial data and the relationships with their neighborhoods are very important when performing classification for spatial data mining in the real world. Spatial decision trees in previous works have been designed to reflect spatial data characteristics by rating Euclidean distance. However, this only accounts for the distance between objects in the spatial dimension, so it is hard to represent the distribution of spatial data and their relationships. This paper proposes a decision tree based on spatial entropy that represents the distribution of spatial data through dispersion and dissimilarity. Dispersion describes the distribution of spatial objects within the associated class, while dissimilarity indicates the distribution and its relationship with other classes. The ratio of dispersion to dissimilarity expresses how closely the spatial distribution is related to data classified by non-spatial attributes. Our experiments evaluate the accuracy and building time of the decision tree compared to previous methods, achieving improvements of about 18% and 11%, respectively.