• Title/Summary/Keyword: Selection Algorithm (선별 알고리즘)

Search Results: 292

A Blind Watermarking Algorithm using CABAC for H.264/AVC Main Profile (H.264/AVC Main Profile을 위한 CABAC-기반의 블라인드 워터마킹 알고리즘)

  • Seo, Young-Ho;Choi, Hyun-Jun;Lee, Chang-Yeul;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2C / pp.181-188 / 2007
  • This paper proposes a watermark embedding/extracting method using CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coder for the Main profile of MPEG-4 Part 10 H.264/AVC. The algorithm selects blocks, and coefficients within a block, on the basis of contexts extracted from the relationships to adjacent blocks and coefficients. Depending on both the absolute value of a selected coefficient and the watermark bit, the bit is embedded either without modifying the coefficient or by replacing the coefficient's LSB (Least Significant Bit) with the watermark bit, which makes it hard for an attacker to locate the watermarked positions. By selecting a few coefficients near the DC coefficient according to the contexts, the algorithm also satisfies the robustness requirement. In experiments with attacks of various kinds and strengths, the maximum error ratio of the extracted watermark was 5.02%, confirming a very high level of robustness. Because the watermark is embedded during the context modeling and binarization process of CABAC, the additional computation for locating and selecting the coefficients is very small. Consequently, the method is expected to be very useful in applications where video must be compressed immediately after acquisition.
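
A minimal sketch of the LSB embedding rule described above, assuming quantized integer coefficients in zig-zag order. The paper selects blocks and coefficients through CABAC context modeling during binarization; the simple magnitude threshold and near-DC positions used here are illustrative stand-ins, and all names are hypothetical.

```python
def embed_bit(coeff: int, bit: int) -> int:
    """Replace the LSB of a coefficient's magnitude with the watermark bit."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) & ~1) | bit)

def extract_bit(coeff: int) -> int:
    return abs(coeff) & 1

def embed(block, bits, threshold=2, max_pos=3):
    """Embed watermark bits into a few near-DC coefficients of one block.

    Only coefficients with |c| >= threshold are touched, so the magnitude
    never collapses to zero; positions 1..max_pos skip the DC coefficient.
    """
    out, it = list(block), iter(bits)
    for i in range(1, max_pos + 1):
        if abs(out[i]) >= threshold:
            bit = next(it, None)
            if bit is None:
                break
            out[i] = embed_bit(out[i], bit)
    return out
```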

Assessing the Impact of Sampling Intensity on Land Use and Land Cover Estimation Using High-Resolution Aerial Images and Deep Learning Algorithms (고해상도 항공 영상과 딥러닝 알고리즘을 이용한 표본강도에 따른 토지이용 및 토지피복 면적 추정)

  • Yong-Kyu Lee;Woo-Dam Sim;Jung-Soo Lee
    • Journal of Korean Society of Forest Science / v.112 no.3 / pp.267-279 / 2023
  • This research assessed the feasibility of using high-resolution aerial images and deep learning algorithms for estimating land-use and land-cover areas at the Approach 3 level, as outlined by the Intergovernmental Panel on Climate Change. The results from different sampling densities of high-resolution (51 cm) aerial images were compared with the land-cover map provided by the Ministry of Environment and analyzed to estimate the accuracy of the land-use and land-cover areas. Transfer learning was applied to the VGG16 architecture for the deep learning model, and sampling densities of 4 × 4 km, 2 × 4 km, 2 × 2 km, 1 × 2 km, 1 × 1 km, 500 × 500 m, and 250 × 250 m were used for estimating and evaluating the areas. The overall accuracy and kappa coefficient of the deep learning model were 91.1% and 88.8%, respectively. The F-scores were >90% for all categories except pasture, indicating the superior accuracy of the model. Chi-square tests showed no significant difference in the area ratios relative to the Ministry of Environment land-cover map among all sampling densities except 4 × 4 km, at a significance level of p = 0.1. As the sampling density increased, the standard error and relative efficiency decreased. The relative standard error decreased to ≤15% for all land-cover categories at the 1 × 1 km sampling density. These results indicated that a sampling density more detailed than 1 × 1 km is appropriate for estimating land-cover area at the local level.
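
For readers unfamiliar with the setup, the transfer-learning step described above can be sketched as follows. The torchvision VGG16 API is real; the class count, learning rate, and frozen-backbone choice are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # assumption: number of land-use/land-cover categories

# Load ImageNet-pretrained VGG16 and freeze the convolutional backbone.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final classifier layer with one sized for the LULC classes.
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Only the classifier head is trained (illustrative hyperparameters).
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
```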

An Effective Microcalcification Detection in Digitized Mammograms Using Morphological Analysis and Multi-stage Neural Network (디지털 마모그램에서 형태적 분석과 다단 신경 회로망을 이용한 효율적인 미소석회질 검출)

  • Shin, Jin-Wook;Yoon, Sook;Park, Dong-Sun
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.3C / pp.374-386 / 2004
  • The mammogram gives radiologists a way to observe the detailed internal organization of the breast for early detection. This paper focuses on efficiently detecting microcalcification Regions Of Interest (ROIs). Breast cancers can arise from either microcalcifications or masses. Microcalcifications appear in a digital mammogram as tiny dots whose gray levels are slightly higher than those of the surrounding pixels, so the areas that possibly contain microcalcifications can be roughly determined. In general, finding all the microcalcifications in a digital mammogram is very challenging because they resemble some tissue parts of the breast. To detect microcalcification ROIs efficiently, we use four sequential processes: preprocessing for breast-area detection, modified multilevel thresholding, ROI selection using simple thresholding filters, and final ROI selection with two stages of neural networks. The filtering process with boundary conditions removes easily distinguishable tissues while keeping all microcalcifications, which cleans the thresholded mammogram images and speeds up later processing by 86% on average. The first neural network shows an average recognition rate of 96.66%, and the second performs better with an average recognition rate of 98.26%. By removing all tissues while keeping as many microcalcifications as possible, the subsequent parts of a CAD system for detecting breast cancers can become much simpler.
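
The candidate-selection stage (multilevel thresholding followed by simple size-based filtering) might look like the sketch below. The thresholds, the size bound, and the function names are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np
from scipy import ndimage

def candidate_rois(img: np.ndarray, levels=(200, 215, 230), max_area=25):
    """img: 2-D uint8 mammogram; returns centroids of small bright blobs."""
    rois = set()
    for t in levels:                          # modified multilevel thresholding
        mask = img >= t
        labels, n = ndimage.label(mask)
        if n == 0:
            continue
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
        for size, c in zip(sizes, centers):
            if size <= max_area:              # keep only tiny bright dots
                rois.add((int(round(c[0])), int(round(c[1]))))
    return sorted(rois)
```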

Efficient Skew Estimation for Document Images Based on Selective Attention (선택적 주의집중에 의한 문서영상의 효율적인 기울어짐 추정)

  • Gwak, Hui-Gyu;Kim, Su-Hyeong
    • Journal of KIISE: Software and Applications / v.26 no.10 / pp.1193-1203 / 1999
  • In this paper we propose a skew estimation algorithm for English and Korean document images. The proposed method adopts a selective attention strategy: we choose a region of interest that contains a cluster of text components and then apply a Hough transform to that region only. The skew estimation process consists of two steps. In the coarse step, we divide the entire image into several regions and compute the skew angle of each region by accumulating the slopes of lines connecting any two components in the region; the skew angle is estimated within the range of ±45° with a maximum error of ±1°. We then select the region with the most frequent slope in the accumulators and take the corresponding angle as the rough skew angle of the image. In the refine step, a Hough transform is applied to the selected region within ±1° of the angle computed in the coarse step, with an angular resolution of 0.1°. This selective attention strategy minimizes the time cost while maximizing the accuracy of the skew estimation. We measured the performance of the proposed method in an experiment with 2,016 images of various English and Korean documents: the average run time is 0.19 seconds on a Pentium 200 MHz PC, and the average error is ±0.08°. We also demonstrate the superiority of our algorithm by comparing its performance with that of other well-known methods in the literature.
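
A compact sketch of the coarse-to-fine search under stated assumptions: a projection-profile score stands in for the paper's slope and Hough accumulators, since both reward angles at which text components line up, and the region-selection step is omitted. The angle ranges and step sizes follow the abstract (±45° at 1°, then ±1° at 0.1°).

```python
import numpy as np

def profile_score(points: np.ndarray, angle_deg: float, bins: int = 200) -> float:
    """Variance of the row-projection histogram after rotating by angle_deg.

    points: N x 2 array of (x, y) centroids of connected components.
    A peaky (high-variance) profile means the text lines are horizontal.
    """
    a = np.deg2rad(angle_deg)
    y = points[:, 1] * np.cos(a) - points[:, 0] * np.sin(a)
    hist, _ = np.histogram(y, bins=bins)
    return float(hist.var())

def estimate_skew(points: np.ndarray) -> float:
    # Coarse pass: +/-45 deg at 1 deg steps.
    coarse = max(np.arange(-45.0, 46.0, 1.0),
                 key=lambda t: profile_score(points, t))
    # Refine pass: +/-1 deg around the coarse angle at 0.1 deg steps.
    fine = max(np.arange(coarse - 1.0, coarse + 1.05, 0.1),
               key=lambda t: profile_score(points, t))
    return float(fine)
```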

Access Frequency Based Selective Buffer Cache Management Strategy For Multimedia News Data (접근 요청 빈도에 기반한 멀티미디어 뉴스 데이터의 선별적 버퍼 캐쉬 관리 전략)

  • Park, Yong-Un;Seo, Won-Il;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society / v.6 no.9 / pp.2524-2532 / 1999
  • In this paper, we present a new buffer-pool management scheme designed for video-type news objects, to build a cost-effective News On Demand storage server that can serve user requests beyond the limitation of the disk bandwidth. In a News On Demand server, where many user requests for video-type news objects must be serviced while keeping their playback deadlines, the maximum number of concurrent users is limited by the maximum disk bandwidth the server provides. With our proposed buffer-cache management scheme, a requested object is checked to see whether it is worth caching, based on its average arrival interval and the current disk-traffic density. Only news objects that pass this test are admitted into the buffer pool, where buffers are allocated on an object basis rather than a block basis. We evaluated the performance of the proposed caching algorithm through simulation. The results show that by using this caching scheme to support user requests for real-time news data, 30% more requests can be served, compared with serving those requests by disks alone, without any additional cost.
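
A hedged sketch of the admission test just described: an object is admitted to the cache only when its average request-arrival interval is short and the disks are already busy, and buffers are granted per object. The thresholds, class name, and utilization measure are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

class NewsBufferCache:
    def __init__(self, interval_limit=30.0, traffic_limit=0.8):
        self.arrivals = defaultdict(lambda: deque(maxlen=10))  # recent request times
        self.cache = {}                       # object-level buffer pool
        self.interval_limit = interval_limit  # max avg seconds between requests
        self.traffic_limit = traffic_limit    # disk utilization that triggers caching

    def request(self, obj_id, disk_utilization, load_object):
        now = time.time()
        history = self.arrivals[obj_id]
        history.append(now)
        if obj_id in self.cache:              # served from the buffer pool
            return self.cache[obj_id]
        data = load_object(obj_id)            # served from disk
        if len(history) >= 2:
            avg_gap = (history[-1] - history[0]) / (len(history) - 1)
            # Admit only frequently requested objects while disks are busy.
            if avg_gap <= self.interval_limit and disk_utilization >= self.traffic_limit:
                self.cache[obj_id] = data
        return data
```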


Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin;Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.5 / pp.713-725 / 2014
  • Multi-view and 3D video technologies for next-generation video services are being widely studied. These technologies can give users a realistic experience by supporting various views. Because the acquisition and transmission of a large number of views are costly, the main challenges for multi-view and 3D video include view synthesis, video coding, and depth coding. Recently, JCT-3V (Joint Collaborative Team on 3D Video Coding Extension Development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for the development of a fast 3D video encoder as well as new 3D video coding algorithms.
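
Coding-efficiency comparisons in such evaluations are customarily reported as Bjøntegaard delta rate (BD-rate); the abstract does not name the metric, so treating it as BD-rate is an assumption. The sketch below fits a cubic through (PSNR, log-bitrate) points for an anchor and a test codec and integrates the rate gap over the overlapping PSNR range (at least four rate points per curve are needed).

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average percentage bitrate change of the test codec vs. the anchor."""
    la, lt = np.log(rate_anchor), np.log(rate_test)
    pa = np.polyfit(psnr_anchor, la, 3)          # log-rate as a cubic in PSNR
    pt = np.polyfit(psnr_test, lt, 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR range
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    int_t = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0  # negative means bit savings
```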

Moving Target Tracking Algorithm based on the Confidence Measure of Motion Vectors (움직임 벡터의 신뢰도에 기반한 이동 목표물 추적 기법)

  • Lee, Jin-Seong;Lee, Gwang-Yeon;Kim, Seong-Dae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.2 / pp.160-168 / 2001
  • Change detection using a difference picture has been used to detect the locations of moving targets and to track them. This method assumes a static camera, so global motion compensation is required when the camera moves. This paper suggests a method for finding the minimum bounding rectangles (MBRs) of moving targets in image sequences using moving-region detection, especially with a moving camera. If the global motion parameters are estimated inaccurately, the estimated locations of the targets will be inaccurate as well. To alleviate this problem, we introduce the concept of a confidence measure for motion vectors and achieve a more accurate estimation of the global motion. Experimental results show that the proposed method successfully removes the background region and extracts the MBRs of the targets. Even with a moving camera, the new global motion estimation algorithm performs more precisely and reduces the background compensation errors of change detection.
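
One way to read the confidence idea is as weighted global-motion estimation: each block motion vector contributes in proportion to a confidence weight, so vectors on moving targets or in flat regions are discounted. The translational model and the best-versus-runner-up confidence definition below are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np

def match_confidence(best_sad: float, second_sad: float, eps: float = 1e-6) -> float:
    """High when the best block match is clearly better than the runner-up."""
    return max(0.0, 1.0 - best_sad / (second_sad + eps))

def global_motion(vectors: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """Confidence-weighted average of N x 2 block motion vectors."""
    w = np.asarray(confidences, dtype=float)
    v = np.asarray(vectors, dtype=float)
    if w.sum() == 0.0:
        return np.zeros(2)      # no reliable vectors: assume no global motion
    return (v * w[:, None]).sum(axis=0) / w.sum()
```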


Passing Behavior of Vehicles in Signalized Intersection (Focused on Vehicles Driven by Offensive Drivers) (신호교차로에서 차량 통과특성 연구 (공격적인 운전자가 운전하는 차량을 중심으로))

  • Hwang, Kyung-Soo;Hwang, Zun-Hwan;Kim, Jum-San;Rhee, Sung-Mo
    • Journal of Korean Society of Transportation / v.22 no.2 s.73 / pp.103-108 / 2004
  • The motivation of this study comes from the recognition that the headway of a vehicle passing through a signalized intersection cannot be determined merely by the departing sequence. Traffic speed and headway data for vehicles passing through a signalized intersection were obtained using magnetic detectors (NC 97) and a detection program, and the data were analyzed. Without special treatment, the model established for vehicle passing behavior was statistically meaningless. Hence, special treatments such as filtering (retaining vehicles driven by the upper 85% of offensive drivers) and log scaling of the data were carried out. With these treated data, a meaningful model (coefficient of determination 0.91) was established. This model shows that vehicle headway at a signalized intersection is affected by the speed and headway of the preceding vehicle and by the vehicle's own speed.
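
The log-scaled regression described above can be sketched as follows; the variable names and the least-squares fit are assumptions about the model form, not the study's exact specification.

```python
import numpy as np

def fit_headway_model(h, v_own, v_prev, h_prev):
    """Fit log(h) = b0 + b1*log(v_own) + b2*log(v_prev) + b3*log(h_prev)."""
    h, v_own, v_prev, h_prev = map(np.asarray, (h, v_own, v_prev, h_prev))
    X = np.column_stack([np.ones_like(h, dtype=float),
                         np.log(v_own), np.log(v_prev), np.log(h_prev)])
    y = np.log(h)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r_squared = 1.0 - resid.var() / y.var()   # coefficient of determination
    return beta, r_squared
```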

Statistical Approach to Sentiment Classification using MapReduce (맵리듀스를 이용한 통계적 접근의 감성 분류)

  • Kang, Mun-Su;Baek, Seung-Hee;Choi, Young-Sik
    • Science of Emotion and Sensibility / v.15 no.4 / pp.425-440 / 2012
  • As the scale of the internet grows, the amount of subjective data increases, and so does the need to classify subjective data automatically. Sentiment classification is the classification of subjective data by sentiment type. Research on sentiment classification has focused on NLP (Natural Language Processing) and sentiment-word dictionaries, and this earlier work has two critical problems. First, the performance of morpheme analysis in NLP has fallen short of expectations. Second, it is not easy to choose sentiment words and to determine how much sentiment a word carries. To solve these problems, this paper suggests combining web-scale data with a statistical approach to sentiment classification. The proposed method uses statistics over words drawn from web-scale data rather than trying to determine the meaning of each word. Unlike earlier research that depended on NLP algorithms, this approach focuses on the data themselves; Hadoop and MapReduce are used to handle the web-scale data.
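
The statistical, dictionary-free approach lends itself to a classic map/reduce formulation: the mapper emits (word, document label) pairs and the reducer turns co-occurrence counts into a per-word sentiment score. The sketch below runs the two functions in plain Python; on a cluster the same pair would run under Hadoop streaming. The tokenization and score formula are simplified assumptions.

```python
from collections import defaultdict

def mapper(doc: str, label: str):
    """Emit (word, label) pairs; label is 'pos' or 'neg'."""
    for word in doc.lower().split():
        yield word, label

def reducer(word: str, labels: list):
    """Turn co-occurrence counts into a score in [-1, 1]."""
    pos = sum(1 for l in labels if l == 'pos')
    neg = len(labels) - pos
    return word, (pos - neg) / (pos + neg)

def run(labeled_docs):
    """Local stand-in for the shuffle phase between map and reduce."""
    shuffled = defaultdict(list)
    for doc, label in labeled_docs:
        for word, l in mapper(doc, label):
            shuffled[word].append(l)
    return dict(reducer(w, ls) for w, ls in shuffled.items())
```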


A Study on Contents-based Retrieval using Wavelet (Wavelet을 이용한 내용기반 검색에 관한 연구)

  • 강진석;박재필;나인호;최연성;김장형
    • Journal of the Korea Institute of Information and Communication Engineering / v.4 no.5 / pp.1051-1066 / 2000
  • With the recent advances in digital encoding technologies and computing power, large amounts of multimedia information such as images, graphics, audio, and video are widely used in multimedia systems over the Internet. Consequently, diverse retrieval mechanisms are required for users to search for specific information stored in multimedia systems, and content-based retrieval is preferred over text-based keyword retrieval. In this paper, we propose a new content-based indexing and searching algorithm that aims at both high efficiency and high retrieval performance. To achieve these objectives, the proposed algorithm first classifies images through a pre-processing stage of edge extraction, range division, and multiple filtering, and then searches for the target images using the spatial and textural characteristics of the colors extracted in the previous stage. In addition, we describe simulation results of search requests and retrieval outputs for several company trademark images using the proposed wavelet-based content-based retrieval algorithm.
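
Although the abstract does not spell out the feature set, a wavelet-based index of the kind described might be sketched as follows; the PyWavelets calls are real, while the choice of wavelet, decomposition level, and energy features is assumed.

```python
import numpy as np
import pywt

def wavelet_signature(img: np.ndarray, wavelet: str = 'haar', level: int = 2):
    """Mean absolute energy of each sub-band of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]          # approximation band
    for (cH, cV, cD) in coeffs[1:]:               # detail bands per level
        feats.extend(np.mean(np.abs(c)) for c in (cH, cV, cD))
    return np.asarray(feats)

def rank(query_sig: np.ndarray, db: dict):
    """Return database image names ordered by feature distance to the query."""
    return sorted(db, key=lambda name: np.linalg.norm(db[name] - query_sig))
```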
