• Title/Summary/Keyword: Pre Processing


DEM_Comp Software for Effective Compression of Large DEM Data Sets (대용량 DEM 데이터의 효율적 압축을 위한 DEM_Comp 소프트웨어 개발)

  • Kang, In-Gu;Yun, Hong-Sik;Wei, Gwang-Jae;Lee, Dong-Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.2
    • /
    • pp.265-271
    • /
    • 2010
  • This paper discusses a new software package, DEM_Comp, developed for effectively compressing large digital elevation model (DEM) data sets based on Lempel-Ziv-Welch (LZW) compression and Huffman coding. DEM_Comp was developed in the C++ language for Windows operating systems, tested on various sites with different terrain attributes, and the results were evaluated. Recently, high-resolution DEMs have been produced using new equipment and related technologies such as LiDAR (Light Detection And Ranging) and SAR (Synthetic Aperture Radar). DEM compression is useful because it reduces disk space and transmission bandwidth. Generally, data compression is divided into two processes: i) analyzing the relationships in the data and ii) deciding on the compression and storage methods. DEM_Comp uses a three-step compression algorithm: pre-processing of the regular-grid DEM, Lempel-Ziv compression, and Huffman coding. When pre-processing alone was used on high- and low-relief terrain, the efficiency was approximately 83%, but after completing all three steps of the algorithm it increased to 97%. Compared with general commercial compression software, these results show approximately 14% better performance. DEM_Comp as developed in this research offers a more efficient way of distributing, storing, and managing large high-resolution DEMs.
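
The first two stages of the three-step pipeline above can be sketched in a few lines. This is a minimal illustration, not the DEM_Comp implementation: the delta-encoding step is only an assumption about what the grid pre-processing might look like, and the LZW routine works on small symbol lists rather than real DEM files.

```python
def delta_encode(row):
    # Assumed pre-processing step: store differences between neighboring
    # elevations; deltas are small on smooth terrain and compress well.
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def lzw_compress(data):
    # Minimal LZW over a list of hashable symbols: grow a phrase dictionary
    # and emit one code per longest-known phrase.
    dictionary = {(s,): i for i, s in enumerate(sorted(set(data)))}
    w, out = (), []
    for s in data:
        wc = w + (s,)
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = (s,)
    if w:
        out.append(dictionary[w])
    return out
```

In a full pipeline the emitted codes would then be entropy-coded with Huffman coding, the third stage the abstract describes.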

Salient Object Extraction from Video Sequences using Contrast Map and Motion Information (대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출)

  • Kwak, Soo-Yeong;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1121-1135
    • /
    • 2005
  • This paper proposes a moving object extraction method using a contrast map and salient points. To make the contrast map, we generate three feature maps (luminance, color, and directional maps) and extract salient points from the image. Using these features, we can easily decide the location of an Attention Window (AW). The purpose of the AW is to remove useless regions in the image, such as the background, and to reduce the amount of image processing. To set the exact location and flexible size of the AW, we use motion features instead of pre-assumptions or heuristic parameters. After determining the AW, we compute the difference of edges from the AW boundary to its inner area, from which we extract horizontal and vertical candidate regions. The intersection of the two candidates, obtained through a logical AND operation, is further processed by morphological operations. The proposed algorithm has been applied to many video sequences with static backgrounds, such as surveillance video, and moving objects were segmented quite well with accurate boundaries.
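
A single-feature contrast map (luminance only; the paper also uses color and directional maps) can be sketched as below. The neighborhood size and the contrast measure — absolute deviation from the local mean — are illustrative assumptions, not the paper's exact formulation.

```python
def luminance_contrast_map(gray, radius=1):
    # Contrast map sketch: each pixel's absolute difference from the mean of
    # its (2*radius+1)^2 neighborhood; high values mark salient regions.
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = abs(gray[y][x] - sum(vals) / len(vals))
    return out
```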

A study on the Improvement Method of the Report and Reward System on an Illegal Behavior of the Emergency Exit (비상구 불법행위 신고포상제도의 개선방안에 관한 연구)

  • Kim, Myeong Sik;Lee, Tae Shik;Cho, Won Cheol
    • Journal of Korean Society of Disaster and Security
    • /
    • v.5 no.2
    • /
    • pp.49-59
    • /
    • 2012
  • Safety management of emergency exits is directly related to citizens' deaths in fire situations, but the on-site, control-centered way fire stations process this work has limits, so citizens' concern and participation are needed. Against this background, a report and reward system for illegal behavior concerning emergency exits has been operated nationally since 2010, but an unfit operating situation has arisen in which "exit paparazzi" report violations with the intent of receiving reward payments, contrary to the mission and direction of the system. This study suggests solutions through analyzing the operation results for illegal emergency exit reports in sixteen provinces and the Seoul metropolitan area from 2010 to 2011. First, the report targets should be adjusted to multiple-use establishments and large-scale multiple-use facilities, reports by the same person on minor violations should be limited to five, and fine revenues should be invested in disaster prevention activities related to exits. Second, to improve report accuracy, reporters should receive the information needed to confirm an illegal act and be guided toward pre-monitoring activity through safety education and information about seasonal and vulnerable business locations. Finally, to support the affected facilities, fire officers should prevent repetitive reports on the same place, relate reporting activity to the volunteer service system by counting it as volunteer service time, and support reporters in acting as disaster prevention volunteers.

Human Visual Perception-Based Quantization For Efficiency HEVC Encoder (HEVC 부호화기 고효율 압축을 위한 인지시각 특징기반 양자화 방법)

  • Kim, Young-Woong;Ahn, Yong-Jo;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.28-41
    • /
    • 2017
  • In this paper, a fast encoding algorithm for the High Efficiency Video Coding (HEVC) encoder is studied. For coding efficiency, the current HEVC reference software divides the input image into Coding Tree Units (CTUs), which are then re-divided into CUs up to the maximum depth in quad-tree form for rate-distortion optimization (RDO); this is one of the reasons the encoding process has high complexity. To reduce this complexity, we propose a method that determines the maximum depth of the CU using hierarchical clustering in a pre-processing step, where the clustering operates on the average motion vectors (MVs) of neighboring blocks. Experimental results show that the proposed method achieves an average 16% time saving with minimal BD-rate loss on 1080p video. Combined with a previous fast algorithm, it achieves an average 45.13% time saving with 1.84% BD-rate loss.
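
The idea of capping CU depth from the spread of neighboring motion vectors can be sketched as follows. The variance thresholds and the variance-to-depth mapping are illustrative assumptions of ours, not the paper's hierarchical clustering rule.

```python
def max_cu_depth(motion_vectors, thresholds=(1.0, 4.0, 16.0)):
    # Heuristic sketch: homogeneous motion among neighboring blocks suggests
    # a shallow quad-tree split is enough; dispersed motion allows full depth.
    # thresholds are assumed variance cut-offs, one per extra depth level.
    n = len(motion_vectors)
    mx = sum(v[0] for v in motion_vectors) / n
    my = sum(v[1] for v in motion_vectors) / n
    var = sum((v[0] - mx) ** 2 + (v[1] - my) ** 2 for v in motion_vectors) / n
    for depth, t in enumerate(thresholds):
        if var < t:
            return depth
    return 3  # full depth: 64x64 CTU down to 8x8 CUs
```

RDO would then only evaluate CU splits up to the returned depth, which is where the time saving comes from.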

A Robust Staff Line Height and Staff Line Space Estimation for the Preprocessing of Music Score Recognition (악보인식 전처리를 위한 강건한 오선 두께와 간격 추정 방법)

  • Na, In-Seop;Kim, Soo-Hyung;Nquyen, Trung Quy
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.29-37
    • /
    • 2015
  • In this paper, we propose a robust pre-processing module for camera-based Optical Music Score Recognition (OMR) on mobile devices. Captured images suffer from many distortions, such as illumination changes, blur, and low resolution, and music sheets with complex backgrounds are especially difficult to recognize. Throughout a symbol recognition system, the staff line height and staff line space are used many times and have a big impact on the recognition module, so robust and accurate estimates of both are essential. Some estimation methods have been proposed for binary images, but for complex-background music sheet images the results of common binarization algorithms are unsatisfactory, which can cause incorrect staff line height and staff line space estimates. We propose a robust staff line height and staff line space estimation using run-length encoding on an edge image. The proposed method is composed of two steps. In the first step, we estimate the staff line height and staff line space on an edge image obtained with the Sobel operator over image blocks; each column of the edge image is encoded with the run-length encoding algorithm. In the second step, we detect the staff lines with the Stable Path algorithm and remove them with an adaptive Line Track Height algorithm that tracks the staff line positions. The results show that robust and accurate estimation is possible even in complex background cases.
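
The column-wise run-length estimation in the first step can be sketched as below, assuming a binary edge image is already available: the most frequent black run length approximates the staff line height and the most frequent white run length the staff line space.

```python
from collections import Counter

def staff_metrics(binary):
    # binary: 2-D list, 1 = edge/ink pixel, 0 = background.
    # Run-length encode every column; the modal black run estimates staff
    # line height, the modal white run estimates staff line space.
    black, white = Counter(), Counter()
    h, w = len(binary), len(binary[0])
    for x in range(w):
        run, val = 1, binary[0][x]
        for y in range(1, h):
            if binary[y][x] == val:
                run += 1
            else:
                (black if val else white)[run] += 1
                run, val = 1, binary[y][x]
        (black if val else white)[run] += 1
    line_height = black.most_common(1)[0][0] if black else 0
    line_space = white.most_common(1)[0][0] if white else 0
    return line_height, line_space
```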

A Study on Workbench-based Dynamic Service Design and Construction of Computational Science Engineering Platform (계산과학공학 플랫폼의 워크벤치 기반 동적 서비스 설계 및 구축에 관한 연구)

  • Kwon, Yejin;Jeon, Inho;Ma, Jin;Lee, Sik;Cho, Kum Won;Seo, Jerry
    • Journal of Internet Computing and Services
    • /
    • v.19 no.3
    • /
    • pp.57-66
    • /
    • 2018
  • EDISON (EDucation-research Integration through Simulation On the Net) is a web simulation service based on cloud computing. Through pre-built infrastructure and various calculation nodes, EDISON provides analysis results for computational science and engineering problems that are difficult or impossible to analyze with a user's personal resources. The simulation execution environment is provided in a web portal so that EDISON can be accessed regardless of the user's device or operating system. The purpose of this research is to design and construct a workbench-based, real-time dynamic service that provides an integrated user interface for the EDISON system, the computational science and engineering simulation and analysis platform currently provided to users. We also devised a workbench-based user simulation service environment whose user interface resembles the computational science and engineering simulation software used locally, and which can configure a dynamic web environment with various analyzers, pre-processors, and simulation software. To provide these web services, each service required by the user is configured as a portlet, and as a result the simulation service using the workbench is constructed.

Shear Bond Strength of Veneering Ceramic and Zirconia Core according to the Surface Treatments (지르코니아 코어의 표면처리 방법에 따른 도재 축성의 전단결합강도)

  • Sin, Cheon-Ho;Hwang, Seong-Sig;Han, Gyeong-Soon
    • Journal of dental hygiene science
    • /
    • v.13 no.4
    • /
    • pp.487-492
    • /
    • 2013
  • This study aimed to examine the correlation between the surface treatment of a zirconia core and shear bond strength. The specimens were made by immersing in coloring liquid for two minutes and drying to produce a colored zirconia core following the manufacturer's instructions. The specimens were divided into four subgroups according to the surface treatment: sandblasted plus liner treatment (SLT), sandblasted treatment (ST), liner treatment (LT), and no treatment (control, NT). The specimens were mounted in a device according to ISO/TS 11405 and tested for shear bond strength at a shearing speed of 1 mm per minute using an Instron multi-purpose tester. The collected data were analyzed by one-way ANOVA and t-test. After applying the liner and sandblasting to the zirconia core, the shear bond strength values were, in order, SLT (23.19 MPa), ST (21.17 MPa), LT (20.53 MPa), and NT (16.46 MPa). There was a significant difference in surface roughness between the NT and ST groups (p<0.001) and in compressive shear bond strength between the NT and SLT groups (p<0.05). Therefore, sandblasting plus liner treatment on the pre-sintered substructure increased the bond strength of the veneering ceramic compared with the other surface treatments.

Locally adaptive intelligent interpolation for population distribution modeling using pre-classified land cover data and geographically weighted regression (지표피복 데이터와 지리가중회귀모형을 이용한 인구분포 추정에 관한 연구)

  • Kim, Hwahwan
    • Journal of the Korean association of regional geographers
    • /
    • v.22 no.1
    • /
    • pp.251-266
    • /
    • 2016
  • Intelligent interpolation methods such as dasymetric mapping are considered the best way to disaggregate zone-based population data by observing and utilizing the internal variation within each source zone. This research reviews the advantages and problems of the dasymetric mapping method and presents a geographically weighted regression (GWR) based method that takes into consideration the spatial heterogeneity of the population density–land cover relationship. The locally adaptive intelligent interpolation method can make use of readily available ancillary information in the public domain without the need for additional data processing. In the case study, we use the pre-classified National Land Cover Dataset 2011 to test the performance of the proposed method (the GWR-based multi-class dasymetric method) against four other popular population estimation methods: areal weighting interpolation, pycnophylactic interpolation, the binary dasymetric method, and the globally fitted ordinary least squares (OLS) based multi-class dasymetric method. The GWR-based multi-class dasymetric method outperforms all the others. This is attributed to the fact that spatial heterogeneity is accounted for in the process of determining density parameters for land cover classes.
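
The multi-class dasymetric allocation underlying these methods can be sketched as below, with globally fixed density weights (closer to the OLS variant than to GWR, which would fit the weights locally). The class names and weight values are purely illustrative.

```python
def dasymetric_allocate(zone_pop, pixel_classes, density):
    # Multi-class dasymetric sketch: split a zone's population across its
    # pixels in proportion to per-land-cover density weights.
    # density maps land-cover class -> assumed relative population density.
    weights = [density.get(c, 0.0) for c in pixel_classes]
    total = sum(weights)
    if total == 0:  # no habitable cover: fall back to plain areal weighting
        return [zone_pop / len(pixel_classes)] * len(pixel_classes)
    return [zone_pop * w / total for w in weights]
```

The allocation is pycnophylactic in the volume-preserving sense: the per-pixel values always sum back to the zone total.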


Edge-based spatial descriptor for content-based Image retrieval (내용 기반 영상 검색을 위한 에지 기반의 공간 기술자)

  • Kim, Nac-Woo;Kim, Tae-Yong;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.1-10
    • /
    • 2005
  • Content-based image retrieval systems are being actively investigated owing to their ability to retrieve images based on actual visual content rather than manually associated textual descriptions. In this paper, we propose a novel approach to image retrieval based on edge structural features, using an edge correlogram and a color coherence vector. After the color vector angle is applied in the pre-processing stage, an image is divided into two parts: a high-frequency image and a low-frequency image. In the low-frequency image, the global color distribution of smooth pixels is extracted by the color coherence vector, thereby incorporating spatial information into the proposed color descriptor. In the high-frequency image, the distribution of gray pairs at edges is extracted by the edge correlogram. Since the proposed algorithm includes the spatial and edge information between colors, it robustly reduces the effect of significant changes in appearance and shape in image analysis. The proposed method provides a simple and flexible description of images with complex scenes in terms of the structural features of the image content. Experimental evidence suggests that our algorithm outperforms recent histogram refinement methods for image indexing and retrieval. To index the multidimensional feature vectors, we use an R*-tree structure.
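
The edge correlogram of gray pairs can be sketched as below: count co-occurring gray levels at a fixed distance, restricted to edge pixels, and normalize to a distribution. The fixed distance and the restriction to horizontal/vertical neighbors are simplifying assumptions, not the paper's exact descriptor.

```python
from collections import Counter

def edge_correlogram(gray, edges, dist=1):
    # Edge correlogram sketch: for every edge pixel, record the pair
    # (its gray level, the gray level of the edge pixel `dist` away),
    # then normalize the pair counts into a probability distribution.
    pairs = Counter()
    h, w = len(gray), len(gray[0])
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            for dy, dx in ((0, dist), (dist, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and edges[ny][nx]:
                    pairs[(gray[y][x], gray[ny][nx])] += 1
    total = sum(pairs.values())
    return {p: c / total for p, c in pairs.items()} if total else {}
```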

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key requirement for an autonomous mobile robot is to localize itself accurately and simultaneously build a map of the environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance, and these features are used in the map building and localization process. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and labeling and convex hull techniques are used to segment the ceiling region from the wall region. In the initial map building process, features are calculated for the segmented regions and stored in a map database. Features are then continuously calculated from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching during map building: when features match the existing map, the robot's pose is estimated and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes. The positioning accuracy is ±13 cm, and the average error in robot heading is ±3 degrees.
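
The match-or-add map update loop can be sketched as follows. The descriptors here are toy 2-D tuples rather than 128-D scale-invariant feature vectors, and the nearest-neighbour distance threshold is an illustrative assumption.

```python
def update_map(map_feats, new_feats, match_dist=0.5):
    # Map-building sketch: match each new feature descriptor to the map by
    # nearest-neighbour distance; unmatched features are appended to the map
    # (mutating map_feats in place). Returns the number of matches, which a
    # localization step would use to estimate the robot pose.
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matched = 0
    for f in new_feats:
        if map_feats and min(d2(f, m) for m in map_feats) <= match_dist ** 2:
            matched += 1
        else:
            map_feats.append(f)
    return matched
```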
