• Title/Abstract/Keyword: Binary images

Search results: 571 items (processing time: 0.033 seconds)

머신러닝 기법을 활용한 대용량 시계열 데이터 이상 시점탐지 방법론 : 발전기 부품신호 사례 중심 (Anomaly Detection of Big Time Series Data Using Machine Learning)

  • 권세혁
    • 산업경영시스템학회지 / Vol. 43, No. 2 / pp. 33-38 / 2020
  • Machine learning anomaly detection methods such as PCA anomaly detection and CNN image classification have mostly been applied to cross-sectional data. In this paper, two approaches are suggested for applying ML techniques to identify the failure time in big time series data. PCA anomaly detection is used to classify time rows as normal or abnormal by converting the subject identification problem into the time domain. CNN image classification is used to identify the failure time by restructuring the time series data: the correlation matrix of each one-minute window is computed and converted to TIFF image format. In addition, LASSO, a feature selection method, is applied to select the variables that best identify the failure status. For the empirical study, time series data were collected at one-second intervals from a power generator with 214 components for 25 minutes, including the 20 minutes before the failure time. PCA anomaly detection detected the anomaly 9 minutes 17 seconds before the failure time, but the combination of LASSO and PCA did not, because the target variable was a binary variable assigned on the basis of the failure time. CNN image classification, trained on 10 normal-status images and 5 failure-status images, detected the failure just one minute in advance.
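The workflow described above can be illustrated with a short, hedged sketch of PCA reconstruction-error anomaly detection on time rows; the variable names, component count, and 3-sigma threshold below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (assumption): flag abnormal time rows by their PCA
# reconstruction error on multivariate sensor data.
import numpy as np
from sklearn.decomposition import PCA

def pca_anomaly_scores(X, n_components=5):
    """X: (n_time_rows, n_sensors) matrix of per-second sensor readings."""
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X)                  # project onto the principal subspace
    X_hat = pca.inverse_transform(Z)          # reconstruct from the subspace
    return np.sum((X - X_hat) ** 2, axis=1)   # squared reconstruction error per row

# Rows whose error exceeds mean + 3*std are flagged as candidate failure onsets.
X = np.random.randn(1500, 214)                # 25 minutes of 1 Hz data, 214 components
scores = pca_anomaly_scores(X)
threshold = scores.mean() + 3 * scores.std()
anomalous_rows = np.where(scores > threshold)[0]
```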

다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정 (2D-3D Pose Estimation using Multi-view Object Co-segmentation)

  • 김성흠;복윤수;권인소
    • 로봇학회논문지 / Vol. 12, No. 1 / pp. 33-41 / 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method using various examples.
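As a rough illustration of the region term in such an energy function, the hedged sketch below scores a candidate pose by the overlap (IoU) between the multi-view co-segmentation masks and the projected masks of the reference model; `project_model_mask` is a hypothetical renderer, and the boundary term used in the paper is omitted.

```python
# Minimal sketch (assumption, not the authors' implementation): region-overlap
# score of a candidate (model, pose) pair against multi-view co-segmentations.
import numpy as np

def overlap_score(coseg_masks, model, pose, project_model_mask):
    score = 0.0
    for view_id, seg in coseg_masks.items():             # seg: binary (H, W) array
        proj = project_model_mask(model, pose, view_id)   # binary (H, W) array
        inter = np.logical_and(seg, proj).sum()
        union = np.logical_or(seg, proj).sum()
        score += inter / max(union, 1)                    # IoU for this viewpoint
    return score / max(len(coseg_masks), 1)

# The best (model, pose) pair is the one maximizing this overlap across views.
```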

Stochastic Non-linear Hashing for Near-Duplicate Video Retrieval using Deep Feature applicable to Large-scale Datasets

  • Byun, Sung-Woo;Lee, Seok-Pil
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 8 / pp. 4300-4314 / 2019
  • With the development of video-related applications, media content has increased dramatically. A substantial portion of Internet videos are near-duplicate videos (NDVs), so near-duplicate video retrieval (NDVR) is important for eliminating near-duplicates from web video searches. This paper proposes a novel NDVR system that supports large-scale retrieval with efficient and accurate performance. We extract keyframes from each video at regular intervals and then extract both commonly used features (LBP and HSV) and a newer image feature from each keyframe; a recent study introduced this feature as providing more robust information than existing features even under geometric changes and complex editing of images. We convert the vector set consisting of the extracted features to binary codes through a set of hash functions, so that similarity comparison becomes more efficient because similar videos are more likely to map into the same buckets. Lastly, we calculate similarity to search for NDVs. We examine the effectiveness of the NDVR system and compare it against previous NDVR systems using the public video collection CC_WEB_VIDEO. The proposed NDVR system's performance is very promising compared to previous NDVR systems.
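The bucketing idea can be sketched with simple random-projection hashing; the paper learns its stochastic non-linear hash functions, so the random projections, feature dimension, and bit count below are placeholders used only to show how binary codes map similar features into the same buckets.

```python
# Minimal sketch (assumption): map keyframe feature vectors to binary codes
# so that similar videos tend to fall into the same hash buckets.
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, n_bits=32):
    W = rng.standard_normal((n_bits, dim))
    return lambda x: (W @ x > 0).astype(np.uint8)      # sign of each projection

hash_fn = make_hash(dim=512, n_bits=32)
feature = rng.standard_normal(512)                      # e.g. concatenated LBP/HSV/deep feature
code = hash_fn(feature)                                  # 32-bit binary code
bucket = int("".join(map(str, code)), 2)                 # bucket index for lookup
```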

히스토그램 기반 오츠 이진화 및 퍼지 이진화 방법과 홉필드 네트워크를 이용한 손상된 이진 영상 복원 (Reconstruction of Damaged Binary Images using Histogram-based Otsu and Fuzzy Binarization and Hopfield Network)

  • 강경민;정영훈;서지연;김광백
    • 한국정보통신학회 Conference Proceedings / 2016 Fall Conference / pp. 626-628 / 2016
  • In this paper, we propose a method that, when part of the information in a binary image has been lost, analyzes the histogram to partition it into intervals, binarizes the original image by applying Otsu binarization and fuzzy binarization, and then restores the image with a Hopfield network. The proposed method analyzes the histogram of the gray-scale image to find the regions where the pixel values change sharply and partitions the histogram into intervals accordingly; intervals belonging to regions with large changes are binarized with the Otsu method, while the remaining intervals are binarized with the fuzzy binarization method. The binarized image is then learned by a Hopfield network. Applying the proposed method to test images with information loss, we confirmed that most of the damaged images were restored.
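For reference, a minimal sketch of plain Otsu thresholding, the building block applied to the histogram intervals with large pixel-value variation, is given below; the fuzzy binarization and Hopfield-network restoration steps are not sketched.

```python
# Minimal sketch (assumption): Otsu's method picks the gray level that
# maximizes the between-class variance of the two resulting classes.
import numpy as np

def otsu_threshold(gray):
    """gray: 2-D uint8 array; returns the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# binary = (gray >= otsu_threshold(gray)).astype(np.uint8)
```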


A New Digital Image Steganography Approach Based on The Galois Field GF(pm) Using Graph and Automata

  • Nguyen, Huy Truong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 9 / pp. 4788-4813 / 2019
  • In this paper, we introduce the concepts of optimal and near-optimal secret data hiding schemes. We present a new digital image steganography approach based on the Galois field $GF(p^m)$ using graphs and automata to design data hiding schemes of the general form $(k, N, \lfloor \log_2 p^{mn} \rfloor)$ for binary, gray, and palette images under the given assumptions, where k, m, n, N are positive integers and p is prime. We show sufficient conditions for the existence of, and prove the existence of, some optimal and near-optimal secret data hiding schemes. These results are derived from the concept of the maximal secret data ratio of embedded bits, the module approach, and the fastest optimal parity assignment method proposed by Huy et al. in 2011 and 2013. An application of the schemes to hiding a finite sequence of secret data in an image is also considered. Security analyses and experimental results confirm that our approach can create steganographic schemes that achieve high efficiency in embedding capacity, visual quality, speed, and security, which are key properties of steganography.
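The paper's GF(p^m) graph-and-automata construction is far more general than anything that fits in a few lines; the toy sketch below only illustrates the underlying idea of a (k, N, b) hiding scheme, changing at most k pixels in a block of N to embed b secret bits, using the simplest case of one bit per block via LSB parity. It is not the paper's scheme.

```python
# Toy sketch (assumption): hide one secret bit in a block of N pixels by
# changing at most one least significant bit so the block's sum parity
# matches the bit.
import numpy as np

def embed_bit(block, bit):
    parity = int(block.sum()) & 1
    if parity != bit:                       # change at most one pixel
        block = block.copy()
        block[0] ^= 1                       # flip the LSB of the first pixel
    return block

def extract_bit(block):
    return int(block.sum()) & 1

pixels = np.array([37, 120, 64, 200], dtype=np.uint8)
stego = embed_bit(pixels, bit=1)
assert extract_bit(stego) == 1
```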

Adaptive Attention Annotation Model: Optimizing the Prediction Path through Dependency Fusion

  • Wang, Fangxin;Liu, Jie;Zhang, Shuwu;Zhang, Guixuan;Zheng, Yang;Li, Xiaoqian;Liang, Wei;Li, Yuejun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 9 / pp. 4665-4683 / 2019
  • Previous methods build image annotation models by leveraging three basic dependencies: relations between image and label (image/label), between images (image/image), and between labels (label/label). Even though plenty of research shows that multiple dependencies can work jointly to improve annotation performance, different dependencies do not actually "work jointly" in these frameworks, whose performance largely depends on the result predicted by the image/label component. To address this problem, we propose the adaptive attention annotation model (AAAM) to associate these dependencies with the prediction path, which is composed of a series of labels (tags) in the order they are detected. In particular, we optimize the prediction path by detecting the relevant labels from the easy-to-detect to the hard-to-detect, which are found using Binary Cross-Entropy (BCE) and Triplet Margin (TM) losses, respectively. Besides, in order to capture the information of each label, instead of explicitly extracting regional features, we propose a self-attention mechanism to implicitly enhance the relevant regions and suppress the irrelevant ones. To validate the effectiveness of the model, we conduct experiments on three well-known public datasets, COCO 2014, IAPR TC-12 and NUS-WIDE, and achieve better performance than the state-of-the-art methods.
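The two losses named above are standard PyTorch criteria; the hedged sketch below shows them on dummy tensors, and the equal weighting of the joint objective is an assumption rather than the paper's configuration.

```python
# Minimal sketch (assumption): the BCE and triplet-margin losses used to order
# labels from easy-to-detect to hard-to-detect, applied to placeholder tensors.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
tm = nn.TripletMarginLoss(margin=1.0)

logits = torch.randn(4, 80)                 # predicted scores for 80 labels
targets = torch.randint(0, 2, (4, 80)).float()
easy_loss = bce(logits, targets)            # multi-label binary cross-entropy

anchor = torch.randn(4, 256)                # embeddings (e.g. of labels/regions)
positive = torch.randn(4, 256)
negative = torch.randn(4, 256)
hard_loss = tm(anchor, positive, negative)  # pulls relevant items closer than irrelevant

loss = easy_loss + hard_loss                # joint objective (weighting is an assumption)
```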

Ship Monitoring around the Ieodo Ocean Research Station Using FMCW Radar and AIS: November 23-30, 2013

  • Kim, Tae-Ho;Yang, Chan-Su
    • 대한원격탐사학회지 / Vol. 38, No. 1 / pp. 45-56 / 2022
  • The Ieodo Ocean Research Station (IORS) lies between the exclusive economic zone (EEZ) boundaries of Korea, Japan, and China. The geographical position of the IORS makes it ideal for monitoring ships in the area. In this study, we present ship monitoring results from the Automatic Identification System (AIS) and the Broadband 3G™ radar, which was developed for use on small ships using the Frequency Modulated Continuous Wave (FMCW) technique. AIS and FMCW radar data were collected at the IORS from November 23 to 30, 2013. The acquired FMCW radar data were converted to a 2-D binary image format through pre-processing, including internal and external noise filtering. The ship positions detected in the FMCW radar images were passed to a tracking algorithm. We then compared the detection and tracking results from the FMCW radar with AIS information and found that they matched relatively well, with especially good tracking performance when ships cross paths with each other. The results also show good monitoring capability for small fishing ships, even those not equipped with AIS or with a dysfunctional AIS.
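A hedged sketch of the binary-image step is given below: a noise-filtered radar intensity frame is thresholded into a 2-D binary image and connected blobs are extracted as candidate ship detections to feed the tracker; the threshold and array shapes are placeholders, not the study's processing chain.

```python
# Minimal sketch (assumption): convert a radar intensity frame to a binary
# image and extract blob centroids as candidate ship detections.
import numpy as np
from scipy import ndimage

def detect_ships(intensity, threshold):
    """intensity: 2-D array in radar (range, azimuth) or projected x-y grid."""
    binary = intensity > threshold                        # 2-D binary image
    labels, n = ndimage.label(binary)                     # connected components
    centroids = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    return binary, centroids                              # centroids feed the tracker

frame = np.random.rand(512, 512)                          # placeholder radar frame
binary, ships = detect_ships(frame, threshold=0.98)
```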

Blind Quality Metric via Measurement of Contrast, Texture, and Colour in Night-Time Scenario

  • Xiao, Shuyan;Tao, Weige;Wang, Yu;Jiang, Ye;Qian, Minqian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 11 / pp. 4043-4064 / 2021
  • Night-time image quality evaluation is an urgent requirement in visual inspection. The lighting environment at night results in low brightness, low contrast, loss of detailed information, and colour dissonance, which makes delicately evaluating image quality at night a daunting task. This article presents a new blind quality assessment metric for realistic night-time scenarios that comprehensively considers contrast, texture, and colour. Specifically, the colour-gray-difference (CGD) histograms of image blocks, which represent contrast features, are computed first. Next, texture features, measured by the mean subtracted contrast normalized (MSCN)-weighted local binary pattern (LBP) histogram, are calculated. Then statistical features in the Lαβ colour space are extracted. Finally, the quality prediction model is built with support vector regression (SVR) on the extracted contrast, texture, and colour features. Experiments conducted on the NNID, CCRIQ, LIVE-CH, and CID2013 databases indicate that the proposed metric is superior to the compared BIQA metrics.
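As an illustration of the texture branch, the sketch below computes MSCN coefficients, the weighting applied to the LBP histogram; the Gaussian window and normalization constant follow common BRISQUE-style defaults and are not necessarily the paper's settings.

```python
# Minimal sketch (assumption): mean subtracted contrast normalized (MSCN)
# coefficients of a grayscale image.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7 / 6, C=1.0):
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma)                       # local mean
    sigma_map = np.sqrt(np.abs(gaussian_filter(gray ** 2, sigma) - mu ** 2))
    return (gray - mu) / (sigma_map + C)                    # contrast-normalized map
```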

EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.;H. Hwang
    • 한국농업기계학회 Conference Proceedings / THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING (2000), Vol. III / pp. 715-721 / 2000
  • In this research, a rule- and neural-network-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for colour machine-vision-based quality evaluation of beef. The major quality features of beef are the size and marbling state of the lean tissue, the colour of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin part from the cross-sectional image of the beef rib is the crucial first step. Since its boundary is not clear and is very difficult to trace, a neural network model was developed to isolate the loin part from the entire input image. At the network training stage, normalized colour image data were used. The model reference of the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (11×11 masks selected from the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and the neural-network-based boundary extraction algorithm were implemented on beef images and the results were analyzed.
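A hedged sketch of how a binary model reference of the loin boundary could be derived from the R channel is shown below; the fixed threshold and the use of OpenCV contour tracing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption): threshold the R channel to a binary image and
# trace the largest contour as a reference boundary of the lean tissue.
import numpy as np
import cv2

def loin_boundary_reference(bgr_image, r_threshold=120):
    r = bgr_image[:, :, 2]                               # OpenCV stores images as BGR
    binary = (r > r_threshold).astype(np.uint8) * 255    # lean tissue is reddish
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea)            # boundary point sequence
```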


1D 통합된 근접차이에 기반한 자율적인 다중분광 영상 분할 (Unsupervised Multispectral Image Segmentation Based on 1D Combined Neighborhood Differences)

  • 뮤잠멜;윤병춘;김덕환
    • 한국정보처리학회 Conference Proceedings / 2010 Fall Conference / pp. 625-628 / 2010
  • This paper proposes a novel feature extraction method for unsupervised multispectral image segmentation based on one-dimensional combined neighborhood differences (1D CND). In contrast to the original CND, which is applied to a conventional 2-D image, the 1D CND is computed on a single pixel across its spectral bands. The proposed algorithm uses the signs of the differences between the bands of the pixel: the difference values are thresholded to form a binary codeword, and binomial factors are assigned to the bits of this codeword to form a unique value. These values are then grouped to construct the 1D CND feature image, which is used in unsupervised image segmentation. Various experiments using two LANDSAT multispectral images were performed to evaluate the segmentation and classification accuracy of the proposed method. The results show that the 1D CND feature outperforms the spectral feature, with an average classification accuracy of 87.55% versus 55.81% for the spectral feature.
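The per-pixel 1D CND computation can be sketched as follows; the exact neighborhood and difference pattern used in the paper may differ, so this is only a minimal illustration of thresholding band differences into a codeword and weighting the bits by powers of two.

```python
# Minimal sketch (assumption): 1D CND value of one multispectral pixel from
# the signs of differences between adjacent bands.
import numpy as np

def cnd_1d(pixel_bands):
    """pixel_bands: 1-D array of the pixel's values across spectral bands."""
    diffs = np.diff(pixel_bands)                    # differences between adjacent bands
    bits = (diffs >= 0).astype(np.uint8)            # threshold signs into a binary codeword
    weights = 2 ** np.arange(bits.size)             # binomial (power-of-two) factors
    return int(np.dot(bits, weights))               # unique value for this pixel

bands = np.array([52, 60, 41, 78, 90, 87, 95])      # e.g. values of one pixel in 7 bands
feature_value = cnd_1d(bands)                       # used to build the 1D CND feature image
```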