• Title/Summary/Keyword: entropy test

Search results: 129

Fast Inverse Transform Considering Multiplications (곱셈 연산을 고려한 고속 역변환 방법)

  • Hyeonju Song;Yung-Lyul Lee
    • Journal of Broadcast Engineering
    • /
    • v.28 no.1
    • /
    • pp.100-108
    • /
    • 2023
  • In hybrid block-based video coding, transform coding converts spatial-domain residual signals into frequency-domain data and concentrates energy in a low-frequency band to achieve high compression efficiency in entropy coding. The state-of-the-art video coding standard, VVC (Versatile Video Coding), uses DCT-2 (Discrete Cosine Transform type 2), DST-7 (Discrete Sine Transform type 7), and DCT-8 (Discrete Cosine Transform type 8) for the primary transform. In this paper, considering that DCT-2, DST-7, and DCT-8 are all linear transformations, we propose an inverse transform that uses this linearity to reduce the number of multiplications. The proposed inverse transform method reduced encoding and decoding time by an average of 26% and 15% in the AI configuration, and by 4% and 10% in the RA configuration, without any increase in bitrate compared to VTM-8.2.
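
The linearity property the abstract relies on can be sketched as follows. This is not VTM/VVC code; it simply demonstrates, with an orthonormal DCT-II matrix, that the inverse transform of a weighted sum of coefficient vectors equals the weighted sum of the individual inverse transforms, which is what allows multiplications to be merged.

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

n = 8
T = dct2_matrix(n)   # forward transform matrix
Tinv = T.T           # orthonormal, so the inverse is the transpose

X = np.random.randn(n)   # two coefficient vectors
Y = np.random.randn(n)
a, b = 2.0, -3.0

lhs = Tinv @ (a * X + b * Y)            # one combined inverse transform
rhs = a * (Tinv @ X) + b * (Tinv @ Y)   # two separate inverse transforms
assert np.allclose(lhs, rhs)            # linearity holds
```

Combining coefficients first (left-hand side) performs one matrix-vector product instead of two, which is the kind of saving the paper exploits.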

A Study on Estimating the Trip Distribution of High-Speed Rail Station Districts Using an Entropy Model (엔트로피 모형을 활용한 고속철도 역세권 통행분포 추정에 관한 연구)

  • Cho, Hangung;Kim, Sigon;Kim, Jinhowan;Jeon, Sangmin
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.32 no.6D
    • /
    • pp.679-686
    • /
    • 2012
  • The first phase of the KTX high-speed rail opened in April 2004, and the second phase opened in November 2010. Demand for high-speed rail has continued to increase since the opening, as it holds a speed advantage over competing modes of transportation. The opening has also driven changes in travel patterns, firm location, and population distribution, reorganizing the social, economic, and transportation-related spatial structure. In this study, survey data for high-speed rail station districts were used with the EMME/2 program to estimate trip distribution by two-dimensional balancing, calibrating the model parameters for each access mode to the station. The estimated access-mode parameters (θ) were 0.0395 for autos, 0.0390 for buses, 0.0650 for subway, and 0.0415 for taxis. Comparing the modeled trip length frequency distribution (TLFD) against the survey data gave R² values of 0.909 for cars, 0.923 for buses, 0.745 for subway, and 0.922 for taxis, and the F-test p-values were below 0.05, judged significant at the 95% confidence level. In future work, the trip frequency distribution should be analyzed at 5 km intervals, the zone system should be refined from middle zones to small zones (administrative districts), and a combined deterrence function reflecting the 0-5 km trip distance interval should be studied using the gravity model and three-dimensional balancing.
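
The two-dimensional balancing step the study runs in EMME/2 can be sketched as iterative proportional fitting (the Furness method): a seed matrix from the entropy model is alternately scaled to match observed production (row) and attraction (column) totals. The zone sizes, costs, and totals below are illustrative, not from the paper.

```python
import numpy as np

def two_dim_balance(seed, row_totals, col_totals, iters=100, tol=1e-9):
    # Furness / iterative proportional fitting on a trip matrix.
    t = seed.astype(float).copy()
    for _ in range(iters):
        t *= (row_totals / t.sum(axis=1)).reshape(-1, 1)  # match productions
        t *= col_totals / t.sum(axis=0)                   # match attractions
        if np.allclose(t.sum(axis=1), row_totals, atol=tol):
            break
    return t

# Toy 3-zone example; the seed uses an entropy-model deterrence exp(-theta*cost),
# with theta = 0.0395 (the auto parameter reported in the paper).
theta = 0.0395
cost = np.array([[5.0, 10.0, 15.0],
                 [10.0, 5.0, 10.0],
                 [15.0, 10.0, 5.0]])
seed = np.exp(-theta * cost)
trips = two_dim_balance(seed,
                        np.array([100.0, 200.0, 150.0]),   # productions
                        np.array([120.0, 180.0, 150.0]))   # attractions
```

After convergence the matrix reproduces both marginal totals while preserving the entropy-model structure of the seed.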

Texture Feature analysis using Computed Tomography Imaging in Fatty Liver Disease Patients (Fatty Liver 환자의 컴퓨터단층촬영 영상을 이용한 질감특징분석)

  • Park, Hyong-Hu;Park, Ji-Koon;Choi, Il-Hong;Kang, Sang-Sik;Noh, Si-Cheol;Jung, Bong-Jae
    • Journal of the Korean Society of Radiology
    • /
    • v.10 no.2
    • /
    • pp.81-87
    • /
    • 2016
  • In this study we propose a texture feature analysis algorithm that distinguishes a normal image from a diseased image using CT images of fatty liver patients; it generates eigen images and test images for the proposed computer-aided diagnosis system and performs a quantitative analysis of six parameters, from which the recognition rate for fatty liver CT images was derived and evaluated. Examining some 30 CT images of fatty liver, the recognition rates for the individual texture features were: average gray level as high as 100%, entropy 96.67%, and skewness 93.33%, while the remaining parameters showed somewhat lower disease recognition rates of 83.33% for smoothness, 86.67% for uniformity, and 80% for average contrast. Consequently, if software enabling a computer-aided diagnosis system for medical images is developed on the basis of these results, it will allow automatic detection of diseased regions in fatty liver CT images and quantitative analysis. Such results can be used as computer-aided diagnosis data, increasing accuracy and shortening time in the final reading stage.
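
The six parameters named in the abstract are standard first-order (histogram-based) texture descriptors. A minimal sketch of how they are commonly computed is below; the paper's exact definitions and normalizations may differ.

```python
import numpy as np

def texture_features(img, levels=256):
    # First-order texture features from the gray-level histogram.
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                    # gray-level probabilities
    g = np.arange(levels)
    mean = (g * p).sum()                                  # average gray level
    var = ((g - mean) ** 2 * p).sum()
    contrast = np.sqrt(var)                               # average contrast
    smooth = 1 - 1 / (1 + var / (levels - 1) ** 2)        # smoothness
    skew = ((g - mean) ** 3 * p).sum() / (levels - 1) ** 2  # skewness (3rd moment)
    uniform = (p ** 2).sum()                              # uniformity (energy)
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()                   # entropy
    return mean, contrast, smooth, skew, uniform, entropy

feats = texture_features(np.random.randint(0, 256, (64, 64)))
```

A perfectly uniform image gives zero contrast, zero entropy, and uniformity of 1, which is a quick sanity check on the implementation.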

Detection Efficiency of Microcalcification using Computer Aided Diagnosis in the Breast Ultrasonography Images (컴퓨터보조진단을 이용한 유방 초음파영상에서의 미세석회화 검출 효율)

  • Lee, Jin-Soo;Ko, Seong-Jin;Kang, Se-Sik;Kim, Jung-Hoon;Park, Hyung-Hu;Choi, Seok-Yoon;Kim, Chang-Soo
    • Journal of radiological science and technology
    • /
    • v.35 no.3
    • /
    • pp.227-235
    • /
    • 2012
  • Digital mammography makes it possible to reproduce the entire breast image, and it is used to detect microcalcification and masses, the most important findings of nonpalpable early breast cancer, so it has served as the primary screening test for breast disease. Microcalcification in breast lesions is reported to be important in the diagnosis of early breast cancer. In this study, six texture feature algorithms were used to detect microcalcification on breast ultrasound (US) images, and the lesion recognition rate was analyzed between normal US images and US images in which microcalcification is seen. As a result of the experiment, the computer-aided diagnosis recognition rate distinguishing breast disease on mammography and US images was considerably high, 70-98%. The average contrast and entropy parameters scored low in ROC analysis, but the sensitivity and specificity of the other four parameters were over 90%, so it is possible to detect microcalcification on US images. If research on additional parameter algorithms beyond these six texture features continues and a basis for practical use of CAD is prepared, the method can be meaningful as a pre-reading aid and is considered very useful for early diagnosis of breast cancer.
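
The sensitivity and specificity figures cited from the ROC analysis are derived from the confusion counts in the usual way; a minimal sketch (not code from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # tp/fn/tn/fp: true-positive, false-negative, true-negative, false-positive counts
    sens = tp / (tp + fn)   # sensitivity: fraction of diseased images detected
    spec = tn / (tn + fp)   # specificity: fraction of normal images cleared
    return sens, spec
```

For example, 90 detected of 100 diseased and 95 cleared of 100 normal gives sensitivity 0.90 and specificity 0.95, both above the 90% threshold the abstract reports for four of the parameters.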

Acquisition of Intrinsic Image by Omnidirectional Projection of ROI and Translation of White Patch on the X-chromaticity Space (X-색도 공간에서 ROI의 전방향 프로젝션과 백색패치의 평행이동에 의한 본질 영상 획득)

  • Kim, Dal-Hyoun;Hwang, Dong-Guk;Lee, Woo-Ram;Jun, Byoung-Min
    • The KIPS Transactions:PartB
    • /
    • v.18B no.2
    • /
    • pp.51-56
    • /
    • 2011
  • Algorithms for intrinsic images reduce color differences in RGB images caused by the temperature of black-body radiators. Because they are based on a reference light and detect a single invariant direction, these algorithms are weak on real images, which can have multiple invariant directions when the scene illuminant is colored. To solve these problems, this paper proposes a method of acquiring an intrinsic image by omnidirectional projection of an ROI and a translation of the white patch in the χ-chromaticity space. Because the three-dimensional RGB space is not easy to analyze, the χ-chromaticity, which excludes the brightness factor, is employed in this paper. After the effect of the colored illuminant is reduced by the translation of the white patch, an invariant direction is detected by omnidirectional projection of an ROI in this chromaticity space. When the RGB image has multiple invariant directions, a single ROI is selected using the bin with the highest frequency in the 3D histogram. The two subsequent operations, projection and inverse transformation, then yield the intrinsic image. In the experiments, the test images were the four datasets presented by Ebner, and the evaluation criteria were the standard deviation of the invariant direction, the constancy measure, the color space measure, and the color constancy measure. The experimental results showed that the proposed method had a lower standard deviation than the entropy-based method and that its performance was about two times higher than that of the compared algorithm.
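
The chromaticity machinery the abstract builds on can be sketched briefly. One common 2-D log-chromaticity drops the brightness factor by taking band ratios, and projecting onto the axis orthogonal to the invariant direction yields a 1-D illumination-invariant quantity; the specific coordinates and function names below are illustrative assumptions, not the paper's exact χ-space.

```python
import numpy as np

def log_chromaticity(rgb):
    # Brightness-free 2-D log-chromaticity: [log(R/G), log(B/G)].
    # A uniform brightness scaling of the pixel cancels in the ratios.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)

def invariant_projection(chi, theta):
    # Project chromaticities onto the axis orthogonal to the
    # invariant direction (given here as the angle theta).
    d = np.array([-np.sin(theta), np.cos(theta)])
    return chi @ d
```

Under a black-body illuminant change, chromaticities shift along the invariant direction, so the orthogonal projection removes that shift while preserving surface-dependent variation.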

A Design and Implementation of a Content_Based Image Retrieval System using Color Space and Keywords (칼라공간과 키워드를 이용한 내용기반 화상검색 시스템 설계 및 구현)

  • Kim, Cheol-Ueon;Choi, Ki-Ho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.6
    • /
    • pp.1418-1432
    • /
    • 1997
  • Most content-based image retrieval techniques use color and texture as retrieval indices, but color-histogram and color-pair based color retrieval suffers from a lack of spatial information and text. This paper describes the design and implementation of a content-based image retrieval system using color space and keywords. The preprocessor for image retrieval uses the coordinate system of the existing HSI (Hue, Saturation, Intensity) model and splits an image into chromatic and achromatic regions; the image size is normalized to 200×N or N×200 and true color is converted to 256 colors. Two color histograms, for background and object, are used to decide color selection in the color space, and spatial information is obtained using maximum entropy discretization. The class, color, shape, location, and size of an image can be chosen by keyword; input colors are limited to 15 chromatic and achromatic color keywords of the Korean Industrial Standards. Retrieval uses these properties as keys in the similarity computation, and the weight values of color space α(%) and keyword β(%) can be chosen by the user when entering the query, adjusting the values according to the properties of the image content. In tests, retrieval using the extracted features (color space and keywords) alone scored lower than retrieval with weight values; with weight values, the averages of the measured parameters were approximately Precision 0.858, Recall 0.936, RT 1, and MT 0. These results demonstrate higher retrieval effectiveness than content-based image retrieval using color space or keywords alone.
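
The user-weighted combination of the two similarity scores can be sketched as below; the function name and the percent-weight convention are assumptions for illustration, following the abstract's α(%)/β(%) notation.

```python
def combined_similarity(color_sim, keyword_sim, alpha, beta):
    # alpha, beta: user-chosen weights in percent for the color-space
    # similarity and the keyword similarity; they should sum to 100.
    assert abs(alpha + beta - 100) < 1e-9
    return (alpha * color_sim + beta * keyword_sim) / 100
```

For example, equal weights (α = β = 50) on color similarity 0.8 and keyword similarity 0.6 give a combined score of 0.7; shifting weight toward the keyword term lets the user favor semantic matches over color matches.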


Texture Feature Analysis Using a Brain Hemorrhage Patient CT Images (전산화단층촬영 영상을 이용한 뇌출혈 질감특징분석)

  • Park, Hyonghu;Park, Jikoon;Choi, Ilhong;Kang, Sangsik;Noh, Sicheol;Jung, Bongjae
    • Journal of the Korean Society of Radiology
    • /
    • v.9 no.6
    • /
    • pp.369-374
    • /
    • 2015
  • In this study we propose a texture feature analysis algorithm that distinguishes a normal image from a diseased image using CT images of brain hemorrhage patients; it generates eigen images and test images for the proposed computer-aided diagnosis system and performs a quantitative analysis of six parameters, from which the recognition rate for brain hemorrhage CT images was derived and evaluated. Examining some 40 CT images of brain hemorrhage, the recognition rates for the individual texture features were as high as 100% for average gray level, average contrast, smoothness, and skewness, while uniformity (95%) and entropy (87.5%) showed somewhat lower disease recognition rates. Consequently, if software enabling a computer-aided diagnosis system for medical images is developed on the basis of these results, it will allow automatic detection of diseased regions in brain hemorrhage CT images and quantitative analysis. Such results can be used as computer-aided diagnosis data, increasing accuracy and shortening time in the final reading stage.

Adaptive Data Hiding Techniques for Secure Communication of Images (영상 보안통신을 위한 적응적인 데이터 은닉 기술)

  • 서영호;김수민;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.664-672
    • /
    • 2004
  • The widespread popularity of wireless data communication devices, coupled with the availability of higher bandwidths, has led to increased user demand for content-rich media such as images and videos. Since such content often tends to be private, sensitive, or paid for, there is a requirement for securing its communication. However, solutions that rely only on traditional compute-intensive security mechanisms are unsuitable for resource-constrained wireless and embedded devices. In this paper, we propose a selective partial image encryption scheme for image data hiding, which enables highly efficient secure communication of image data to and from resource-constrained wireless devices. The encryption scheme is invoked during the image compression process, with the encryption performed between the quantizer and the entropy coder stages. Three data selection schemes are proposed: subband selection, data bit selection, and random selection. We show that these schemes make secure communication of images feasible for constrained embedded devices, and we demonstrate how they can be dynamically configured to trade off the amount of data hiding achieved against the computation imposed on the wireless devices. Experiments conducted on over 500 test images reveal that, with our techniques, the fraction of data to be encrypted varies between 0.0244% and 0.39% of the original image size, while the peak signal-to-noise ratio (PSNR) of the encrypted image varies between about 9.5 dB and 7.5 dB. In addition, visual tests indicate that our schemes provide a high degree of data hiding at much lower computational cost.
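
The PSNR figures quoted for the encrypted images follow the standard definition for 8-bit imagery; a minimal sketch (the paper's exact measurement setup is not restated here):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB between two images of equal shape.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10 * np.log10(peak ** 2 / mse)
```

A low PSNR of 7.5-9.5 dB for the encrypted image, as the abstract reports, indicates the selective encryption renders the image heavily distorted even though only a small fraction of the data is encrypted.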

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field relied on determining whether an abnormality existed based on statistics derived from the data. This methodology worked because data in the past were low-dimensional, so classical statistical methods were effective. However, as data characteristics have grown complex in the era of big data, it has become difficult to accurately analyze and predict the data generated across industry in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, a supervised model predicts test data accurately only when the class distribution is balanced, and most data generated in industry have imbalanced classes, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a convolutional-neural-network model that performs anomaly detection on medical images. In contrast, research on anomaly detection for sequence data using generative adversarial networks is scarce compared to image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that much remains to be tried in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator is an LSTM with a 64-dimensional hidden unit layer. Existing work on anomaly detection for sequence data derives the anomaly score from the entropy of the probability assigned to the actual data; in this paper, as mentioned above, the anomaly score is instead derived with the feature matching technique. In addition, the process of optimizing the latent variable was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not swayed by a single dominant normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%. Experiments were also conducted to measure how much performance changes with the optimization structure of the latent variable; sensitivity improved by about 1%. These results offer a new perspective on optimizing the latent variable, which had previously received relatively little attention.
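
The feature-matching anomaly score the paper adopts (following Salimans et al., 2016) can be sketched as comparing intermediate discriminator features of a real sequence against those of its best generated reconstruction; the function names and the norm choice here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_matching_score(f_real, f_gen):
    # f_real, f_gen: feature vectors taken from an intermediate
    # discriminator layer for the real input and its reconstruction.
    return np.linalg.norm(f_real - f_gen)

def is_anomalous(f_real, f_gen, threshold):
    # A large residual means the generator could not reproduce the
    # input from any latent code, so the input is flagged as anomalous.
    return feature_matching_score(f_real, f_gen) > threshold
```

In practice the latent code behind f_gen is first optimized (here, via the paper's LSTM-based optimization) so that the score reflects how far the input lies from the learned normal-data manifold rather than a poor reconstruction.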