• Title/Summary/Keyword: entropy image

Search Result 347, Processing Time 0.03 seconds

A study on the semi-public space and spatial hierarchy understood from the viewpoint of new paradigm (뉴 패러다임 관점에서 해석한 공간의 위계구조와 준공적 공간에 관한 연구)

  • 신문영
    • Archives of design research
    • /
    • no.16
    • /
    • pp.27-38
    • /
    • 1996
  • Environmental design is the process of creating and suggesting a new culture reflecting the spirit and scientific knowledge of an age, so it is important for a designer who deals with the environment to perceive the present trend of science. The goal of this study is to suggest a way to recover the vanishing image in the current urban environment from the viewpoint of a changing world-view. The process of this study is as follows. 1. According to spatial hierarchy, the role of each space and the importance of each space in correlation with humans are considered. 2. A method to understand space from the viewpoint of the new paradigm and a direction for environmental design's approach are suggested. 3. The notion that the introduction of semi-public space in the urban environment is consistent with the new paradigm is demonstrated, and the semi-public space's role in stimulating urban activity is emphasized. The result of this study shows the possibility that semi-public space, introduced by understanding space on the basis of the new paradigm, expands the territory of life and overcomes negative environmental problems such as disorder and the increase of entropy.


An Optimal Cluster Analysis Method with Fuzzy Performance Measures (퍼지 성능 측정자를 결합한 최적 클러스터 분석방법)

  • 이현숙;오경환
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.6 no.3
    • /
    • pp.81-88
    • /
    • 1996
  • Cluster analysis partitions a collection of data points into a number of clusters, where the data points inside a cluster have a certain degree of similarity; it is a fundamental process of data analysis and has played an important role in solving many problems in pattern recognition and image processing. For these purposes, many clustering algorithms based on distance criteria have been developed, and fuzzy set theory has been introduced to reflect real data, whose boundaries may be fuzzy. If fuzzy cluster analysis is to make a significant contribution to engineering applications, much more attention must be paid to the fundamental question of cluster validity: how well the algorithm has identified the structure that is present in the data. Several validity functionals, such as the partition coefficient, classification entropy, and proportion exponent, have been used to measure validity mathematically. But because cluster validity involves complex aspects, it is difficult to measure with a single measuring function, as in conventional studies. In this paper, we propose four performance indices and a way to measure the quality of the clustering formed by a given learning strategy.

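As a concrete illustration, the partition coefficient and classification entropy mentioned in this abstract can be computed directly from a fuzzy membership matrix. The sketch below uses Bezdek's standard definitions; it is not the paper's own four proposed indices, which are not reproduced here.

```python
import numpy as np

def partition_coefficient(U):
    """Bezdek's partition coefficient: ranges from 1/c (fuzziest) to 1 (crisp)."""
    n = U.shape[1]
    return float(np.sum(U ** 2) / n)

def classification_entropy(U, eps=1e-12):
    """Bezdek's classification entropy: 0 for a crisp partition, log(c) at fuzziest."""
    n = U.shape[1]
    return float(-np.sum(U * np.log(U + eps)) / n)

# U[i, k] = membership of data point k in cluster i; each column sums to 1.
U_crisp = np.array([[1.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
U_fuzzy = np.full((2, 3), 0.5)   # maximally ambiguous partition
```

A crisp partition gives a partition coefficient of 1 and an entropy near 0, while the maximally fuzzy partition gives 1/c and log(c), which is why these functionals serve as validity measures.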

Design of CAVLC Decoder for H.264/AVC (H.264/AVC용 CAVLC 디코더의 설계)

  • Jung, Duck-Young;Sonh, Seung-Il
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.6
    • /
    • pp.1104-1114
    • /
    • 2007
  • Digital video compression techniques play an important role in enabling efficient transmission and storage of multimedia data where bandwidth and storage space are limited. The new video coding standard, H.264/AVC, developed by the Joint Video Team (JVT), significantly outperforms previous standards in compression performance. In particular, variable length coding (VLC) plays a crucial part in video and image compression applications. The H.264/AVC standard adopted Context-based Adaptive Variable Length Coding (CAVLC) as its entropy coding method. CAVLC in H.264/AVC requires a large number of memory accesses. This is a serious problem for applications such as DMB and video phone services because of the considerable amount of power consumed in accessing the memory. To overcome this problem, in this paper we propose a variable length decoding technique that implements memory-free coeff_token, level, and run_before decoding based on arithmetic operations, and uses only 70% of the required memory for total_zero variable length decoding.
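The memory-free, arithmetic style of VLC decoding described here can be illustrated with the Exp-Golomb codes that H.264/AVC uses alongside CAVLC. The actual coeff_token, level, and run_before tables differ, so the sketch below only demonstrates the leading-zero-count idea that replaces a lookup table.

```python
def decode_ue(bits, pos=0):
    """Decode one unsigned Exp-Golomb codeword arithmetically (no lookup
    table): count leading zeros, skip the terminating '1', then read that
    many suffix bits. `bits` is a '0'/'1' string; returns (value, next_pos)."""
    zeros = 0
    while bits[pos + zeros] == '0':
        zeros += 1
    pos += zeros + 1                       # skip the '1' separator
    suffix = bits[pos:pos + zeros]
    value = (1 << zeros) - 1 + (int(suffix, 2) if suffix else 0)
    return value, pos + zeros
```

For example, the codeword `1` decodes to 0, `010` to 1, and `00111` to 6, all with a few arithmetic operations rather than a table access.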

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence
    • /
    • v.20 no.4
    • /
    • pp.389-396
    • /
    • 2022
  • Deep learning is based on the perceptron and is currently being used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics according to the number of neurons for the currently used SGD, momentum, AdaGrad, RMSProp, and Adam methods. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was applied as the activation function, cross-entropy error (CEE) as the loss function, and MNIST was used as the experimental dataset. As a result, it was concluded that 100-300 neurons, the Adam algorithm, and 200 training iterations would be the most efficient in deep learning. This study will provide implications for the choice of algorithm and a reference value for the number of neurons when new learning data are given in the future.
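As a minimal sketch of the Adam update that this study found most efficient, the snippet below applies bias-corrected first and second moments to a toy one-parameter problem. The hyperparameters are the usual defaults, not values taken from the paper.

```python
import numpy as np

def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba): exponential moving averages of the
    gradient and squared gradient, with bias correction."""
    state['t'] += 1
    state['m'] = b1 * state['m'] + (1 - b1) * grad
    state['v'] = b2 * state['v'] + (1 - b2) * grad ** 2
    m_hat = state['m'] / (1 - b1 ** state['t'])
    v_hat = state['v'] / (1 - b2 ** state['t'])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
state = {'t': 0, 'm': 0.0, 'v': 0.0}
w = 1.0
for _ in range(200):
    w = adam_step(w, 2 * w, state)
```

Because the second-moment estimate rescales the step, Adam's effective step size stays close to `lr` regardless of gradient magnitude, which is one reason it tends to converge robustly across neuron counts.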

Development of Semi-Supervised Deep Domain Adaptation Based Face Recognition Using Only a Single Training Sample (단일 훈련 샘플만을 활용하는 준-지도학습 심층 도메인 적응 기반 얼굴인식 기술 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.10
    • /
    • pp.1375-1385
    • /
    • 2022
  • In this paper, we propose a semi-supervised domain adaptation solution for practical face recognition (FR) scenarios in which only a single face image per target identity (to be recognized) is available in the training phase. The main goal of the proposed method is to reduce the discrepancy between the target and source domain face images, which ultimately improves FR performance. The proposed method is based on the Domain Adaptation Network (DAN), using an MMD loss function to reduce the discrepancy between domains. To train more effectively, we develop a novel loss-function learning strategy in which the MMD loss and cross-entropy loss functions are adopted with different weights according to the progress of each epoch during learning. The proposed weight adaptation focuses on training the source domain in the initial learning phase to learn facial feature information such as eyes, nose, and mouth. After the initial learning is completed, the resulting feature information is used to train a deep network on the target domain images. To evaluate the effectiveness of the proposed method, FR performance was compared with a pretrained model trained only with CASIA-WebFace (source images) and a fine-tuned model trained only with FERET's gallery (target images) under the same FR scenarios. The experimental results showed that the proposed semi-supervised domain adaptation improved performance by 24.78% compared to the pre-trained model and 28.42% compared to the fine-tuned model. In addition, the proposed method outperformed other state-of-the-art domain adaptation approaches by 9.41%.
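The epoch-dependent weighting of the cross-entropy and MMD losses could take the form of the schedule sketched below. The paper's exact weights are not given here, so the linear `alpha` schedule is an assumption used purely for illustration.

```python
def combined_loss(ce_loss, mmd_loss, epoch, total_epochs):
    """Hypothetical schedule: emphasize the source-domain cross-entropy
    loss early (to learn facial features), then shift weight toward the
    MMD domain-discrepancy term as training progresses."""
    alpha = epoch / float(total_epochs)    # ramps 0 -> 1 over training
    return (1 - alpha) * ce_loss + alpha * mmd_loss
```

At epoch 0 the combined loss is pure cross-entropy, and by the final epoch it is dominated by the MMD term, mirroring the two-phase behavior the abstract describes.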

Performance Evaluation of ResNet-based Pneumonia Detection Model with the Small Number of Layers Using Chest X-ray Images (흉부 X선 영상을 이용한 작은 층수 ResNet 기반 폐렴 진단 모델의 성능 평가)

  • Youngeun Choi;Seungwan Lee
    • Journal of radiological science and technology
    • /
    • v.46 no.4
    • /
    • pp.277-285
    • /
    • 2023
  • In this study, pneumonia identification networks with a small number of layers were constructed using chest X-ray images. The networks had similar numbers of trainable parameters, and the performance of the trained models was quantitatively evaluated as the network architectures were modified. A total of 6 networks were constructed: a convolutional neural network (CNN), VGGNet, GoogLeNet, a residual network (ResNet) with identity blocks, a ResNet with bottleneck blocks, and a ResNet with both identity and bottleneck blocks. Trainable parameters for the 6 networks were set in a range of 273,921-294,817 by adjusting the output channels of the convolution layers. Network training was implemented with the binary cross-entropy (BCE) loss function, sigmoid activation function, adaptive moment estimation (Adam) optimizer, and 100 epochs. The performance of the trained models was evaluated in terms of training time, accuracy, precision, recall, specificity, and F1-score. The results showed that the trained models with a small number of layers precisely detect pneumonia from chest X-ray images. In particular, the overall quantitative performance of the ResNet-based models was above 0.9, and their performance levels were similar or superior to those based on the CNN, VGGNet, and GoogLeNet. Also, the residual blocks affected the performance of the ResNet-based models. Therefore, this study demonstrated that networks with a small number of layers are suitable for detecting pneumonia using chest X-ray images, and the ResNet-based models can be optimized by applying appropriate residual blocks.
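Matching architectures by trainable parameters, as done in this study, reduces to counting weights per layer. The sketch below counts 3x3 convolution parameters for hypothetical channel widths; the paper's actual layer configurations are not reproduced here.

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Trainable parameters of a k x k convolution layer:
    k*k weights per input/output channel pair, plus one bias per output."""
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

# Example (hypothetical channel widths): a small stack on grayscale input.
total = (conv2d_params(1, 32, 3)       # 3x3 conv, 1 -> 32 channels
         + conv2d_params(32, 64, 3)    # 3x3 conv, 32 -> 64 channels
         + conv2d_params(64, 64, 3))   # 3x3 conv, 64 -> 64 channels
```

Adjusting the output channel counts moves the total up or down, which is how the six networks could be brought into the narrow 273,921-294,817 parameter band for a fair comparison.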

Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.93-101
    • /
    • 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the Gray Level Co-occurrence Matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV image captures spanned March-October 2021, while field surveys on three dates provided ground truth data. We focused on data from the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using visual bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showcased the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently demonstrated the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models. Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of utilizing farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key for optimizing performance in real-world applications.
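The three GLCM features found most impactful here (homogeneity, entropy, and correlation) can be computed from a normalized co-occurrence matrix as sketched below. The offset, gray-level quantization, and feature definitions follow common conventions rather than this paper's exact settings.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for a non-negative
    pixel offset (dx, dy); img holds integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P, eps=1e-12):
    """Homogeneity, entropy (bits), and correlation of a normalized GLCM."""
    i, j = np.indices(P.shape)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    entropy = -np.sum(P * np.log2(P + eps))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    correlation = np.sum((i - mu_i) * (j - mu_j) * P) / (sd_i * sd_j + eps)
    return homogeneity, entropy, correlation

flat = np.zeros((4, 4), dtype=int)          # uniform texture
stripes = np.array([[0, 1], [0, 1]])        # vertical stripes
```

A uniform patch yields homogeneity 1 and entropy 0, while textured patches lower homogeneity, which is exactly the discriminative behavior the classifiers exploit.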

Performance Analysis of MixMatch-Based Semi-Supervised Learning for Defect Detection in Manufacturing Processes (제조 공정 결함 탐지를 위한 MixMatch 기반 준지도학습 성능 분석)

  • Ye-Jun Kim;Ye-Eun Jeong;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.312-320
    • /
    • 2023
  • Recently, there have been increasing attempts to replace defect detection inspections in the manufacturing industry with deep learning techniques. However, obtaining the substantial amount of high-quality labeled data needed to enhance the performance of deep learning models entails economic and temporal constraints. As a solution to this problem, semi-supervised learning, which uses a limited amount of labeled data, has been gaining traction. This study assesses the effectiveness of semi-supervised learning in the defect detection process of manufacturing using the MixMatch algorithm. The MixMatch algorithm incorporates three dominant paradigms in the semi-supervised field: consistency regularization, entropy minimization, and generic regularization. The performance of semi-supervised learning based on the MixMatch algorithm was compared with that of supervised learning using defect image data from the metal casting process. For the experiments, the ratio of labeled data was adjusted to 5%, 10%, 25%, and 50% of the total data. At a labeled data ratio of 5%, semi-supervised learning achieved a classification accuracy of 90.19%, outperforming supervised learning by approximately 22%p. At a 10% ratio, it surpassed supervised learning by around 8%p, achieving 92.89% accuracy. These results demonstrate that semi-supervised learning can achieve significant outcomes even with a very limited amount of labeled data, suggesting valuable applications in real-world research and industrial settings where labeled data is limited.
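Two of MixMatch's ingredients, label sharpening (its entropy-minimization mechanism) and MixUp, can be sketched in a few lines. The temperature and the handling of lambda below follow the original MixMatch paper's conventions, not settings verified from this study.

```python
import numpy as np

def sharpen(p, T=0.5):
    """MixMatch label sharpening: raise class probabilities to the power
    1/T and renormalize, reducing the entropy of the guessed label."""
    p = np.asarray(p, dtype=float) ** (1.0 / T)
    return p / p.sum()

def mixup(x1, x2, y1, y2, lam):
    """MixMatch-style MixUp: convex combination of two examples and their
    labels, with lam clamped to >= 0.5 so the mix stays closer to the
    first example."""
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

Sharpening a guess of [0.6, 0.4] at T=0.5 pushes it toward [0.69, 0.31], nudging the unlabeled predictions toward confident, low-entropy labels.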

Adaptive Data Hiding Techniques for Secure Communication of Images (영상 보안통신을 위한 적응적인 데이터 은닉 기술)

  • 서영호;김수민;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.664-672
    • /
    • 2004
  • The widespread popularity of wireless data communication devices, coupled with the availability of higher bandwidths, has led to increased user demand for content-rich media such as images and videos. Since such content often tends to be private, sensitive, or paid for, there exists a requirement for securing such communication. However, solutions that rely only on traditional compute-intensive security mechanisms are unsuitable for resource-constrained wireless and embedded devices. In this paper, we propose a selective partial image encryption scheme for image data hiding, which enables highly efficient secure communication of image data to and from resource-constrained wireless devices. The encryption scheme is invoked during the image compression process, with the encryption being performed between the quantizer and the entropy coder stages. Three data selection schemes are proposed: subband selection, data bit selection, and random selection. We show that these schemes make secure communication of images feasible for constrained embedded devices. In addition, we demonstrate how these schemes can be dynamically configured to trade off the amount of data hiding achieved against the computation requirements imposed on the wireless devices. Experiments conducted on over 500 test images reveal that, using our techniques, the fraction of data to be encrypted varies between 0.0244% and 0.39% of the original image size. The peak signal-to-noise ratios (PSNR) of the encrypted images were observed to vary between about 9.5 dB and 7.5 dB. In addition, visual tests indicate that our schemes are capable of providing a high degree of data hiding at much lower computational cost.
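A toy version of the subband-selection idea can be sketched as follows: flip the signs of the quantized coefficients inside one selected band with a keystream, leaving everything else untouched. The band layout, keystream, and sign-flip cipher here are all hypothetical stand-ins for the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_encrypt(coeffs, band_mask, key_stream):
    """Hypothetical sketch of selective encryption between the quantizer
    and entropy coder: keystream bits flip the signs of coefficients in
    the selected subband. The operation is its own inverse."""
    out = coeffs.copy()
    idx = np.flatnonzero(band_mask)
    flip = np.where(key_stream[:idx.size] == 1, -1, 1)
    out.flat[idx] = out.flat[idx] * flip
    return out

coeffs = rng.integers(-8, 8, size=(8, 8))   # pretend quantized coefficients
band = np.zeros((8, 8), dtype=bool)
band[:2, :2] = True                          # pretend low-frequency subband
key = rng.integers(0, 2, size=4)             # keystream for the 4 selected cells
enc = selective_encrypt(coeffs, band, key)
```

Only 4 of the 64 coefficients (about 6%) are touched, illustrating how encrypting a carefully chosen fraction of the data can scramble the perceptually important content at low computational cost; applying the same function again with the same key decrypts.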

Texture Feature analysis using Computed Tomography Imaging in Fatty Liver Disease Patients (Fatty Liver 환자의 컴퓨터단층촬영 영상을 이용한 질감특징분석)

  • Park, Hyong-Hu;Park, Ji-Koon;Choi, Il-Hong;Kang, Sang-Sik;Noh, Si-Cheol;Jung, Bong-Jae
    • Journal of the Korean Society of Radiology
    • /
    • v.10 no.2
    • /
    • pp.81-87
    • /
    • 2016
  • In this study, we proposed a texture feature analysis algorithm that distinguishes a normal image from a diseased image using CT images of fatty liver patients, and generates both eigen images and test images that can be applied to the proposed computer-aided diagnosis system in order to perform a quantitative analysis of six parameters. Through this analysis, we derived and evaluated the recognition rate for CT images of fatty liver. Examining over 30 example CT images of fatty liver, the recognition rates for the texture feature values were as follows: Average Gray Level appeared as high as 100%, followed by Entropy at 96.67% and Skewness at 93.33%, while Smoothness (83.33%), Uniformity (86.67%), and Average Contrast (80%) showed somewhat lower disease recognition rates. Consequently, if software enabling a computer-aided diagnosis system for medical images is developed based on this result, it will allow the automatic detection of diseased regions in CT images of fatty liver as well as quantitative analysis. The results can then be used as computer-aided diagnosis data, increasing accuracy and shortening the time required for the final reading.
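The six histogram-based texture parameters named in this abstract (average gray level, average contrast, smoothness, third-moment skewness, uniformity, and entropy) follow standard statistical definitions over the normalized gray-level histogram; a sketch, not the paper's implementation:

```python
import numpy as np

def texture_features(img, levels=256):
    """Statistical texture descriptors from the normalized gray-level
    histogram: average gray level (mean), average contrast (std dev),
    smoothness, skewness (third moment), uniformity, and entropy."""
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    z = np.arange(levels, dtype=float)
    mean = np.sum(z * p)                         # average gray level
    var = np.sum((z - mean) ** 2 * p)
    contrast = np.sqrt(var)                      # average contrast
    smoothness = 1 - 1 / (1 + var)               # 0 for constant regions
    skewness = np.sum((z - mean) ** 3 * p)       # third moment
    uniformity = np.sum(p ** 2)                  # 1 for a single gray level
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return mean, contrast, smoothness, skewness, uniformity, entropy
```

A perfectly uniform region gives contrast, smoothness, skewness, and entropy of 0 with uniformity 1, so departures from those extremes are what separate diseased from normal tissue texture.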