• Title/Summary/Keyword: Local feature

"Dust, Ice, and Gas In Time" (DIGIT) Herschel Observations of GSS30-IRS1 in Ophiuchus

  • Je, Hyerin;Lee, Jeong-Eun;Green, Joel D.;Evans, Neal J. II
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.1
    • /
    • pp.63.2-63.2
    • /
    • 2014
  • As a part of the "Dust, Ice, and Gas In Time" (DIGIT) key program on Herschel, we observed GSS30-IRS1, a Class I protostar located in Ophiuchus (d = 125 pc), with the Herschel/Photodetector Array Camera and Spectrometer (PACS). More than 70 lines were detected within the wavelength range from 50 ${\mu}m$ to 200 ${\mu}m$: CO lines from J = 14-13 to 41-40, several $H_2O$ lines with $E_{up}$ = 100 K to 1500 K, 16 transitions of OH rotational lines, and two atomic [O I] lines at 63 and 145 ${\mu}m$. The [C II] line, known as a tracer of gas externally heated by the interstellar radiation field (ISRF), is also detected at 158 ${\mu}m$. All lines, except [O I] and [C II], are detected only at the central spaxel of $9^{\prime\prime}.4{\times}9^{\prime\prime}.4$. The [O I] emission is extended along a NE-SW orientation, consistent with the known outflow direction, while the [C II] line is detected over all spaxels. One possible explanation for the detection of the [C II] line, and for the lack of correlation between its spatial distribution and any molecular emission, is an enhanced ISRF near GSS30-IRS1. One interesting feature of GSS30-IRS1 is that the continuum emission, unlike the molecular line emission, is extended beyond the point-spread function (PSF), indicative of significant external heating. The best-fit continuum model of GSS30-IRS1, with a physical structure including a flared disk, envelope, and outflow, shows that the internal luminosity is 11 $L_{\odot}$ and that the region is also externally heated by a radiation field enhanced by a factor of 25 compared to the local standard interstellar field.

Texture Image Database Retrieval Using JPEG-2000 Partial Entropy Decoding (JPEG-2000 부분 엔트로피 복호화에 의향 질감 영상 데이터베이스 검색)

  • Park, Ha-Joong;Jung, Ho-Youl
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.5C
    • /
    • pp.496-512
    • /
    • 2007
  • In this paper, we propose a novel JPEG-2000 compressed-image retrieval system using feature vectors extracted through partial entropy decoding. The main idea of the proposed method is to utilize the context information generated during entropy encoding/decoding. In the JPEG-2000 framework, the context of a current coefficient is determined by the significance and/or sign pattern of its neighbors across three bit-plane coding passes and four coding modes. The contexts provide a model for estimating the probability of each symbol to be coded, and they can efficiently describe texture images with different patterns because they represent local properties of the image. In addition, our system can search images directly in the JPEG-2000 compressed domain without full decompression, so the proposed scheme accelerates image retrieval. For simulation, we create various distortion and similarity image databases using MIT VisTex texture images and evaluate the proposed algorithm against previous ones. Through simulations, we demonstrate that our method achieves good performance in terms of retrieval accuracy as well as computational complexity.
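
The abstract describes using the significance contexts generated during JPEG-2000 bit-plane coding as a texture descriptor. The sketch below is an illustrative simplification, not the paper's implementation: for one bit-plane of a quantized wavelet subband it builds a histogram of 8-neighbor significance patterns (the real JPEG-2000 context rules additionally distinguish coding passes, coding modes, and subband orientation) and compares two such descriptors with a Euclidean distance.

```python
import numpy as np

def significance_context_histogram(coeffs, bitplane):
    """Histogram of simplified significance contexts at one bit-plane.

    Each coefficient's context is the 8-bit pattern of which of its
    neighbours are already 'significant' (|value| >= 2**bitplane).
    """
    sig = (np.abs(coeffs) >= 2 ** bitplane).astype(np.int32)
    padded = np.pad(sig, 1, mode="constant")
    h, w = sig.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    context = np.zeros((h, w), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        context |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] << bit
    hist = np.bincount(context.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def descriptor_distance(h1, h2):
    """Euclidean distance between two normalised context histograms."""
    return float(np.linalg.norm(h1 - h2))
```

In practice the coefficients would come from the code-block subbands of the JPEG-2000 stream; here any quantized wavelet subband would do.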

License Plate Detection with Improved Adaboost Learning based on Newton's Optimization and MCT (뉴턴 최적화를 통해 개선된 아다부스트 훈련과 MCT 특징을 이용한 번호판 검출)

  • Lee, Young-Hyun;Kim, Dae-Hun;Ko, Han-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.12
    • /
    • pp.71-82
    • /
    • 2012
  • In this paper, we propose a license plate detection method with improved Adaboost learning and the MCT (Modified Census Transform). The MCT represents local structure patterns as integer-valued features, which are robust to illumination change and memory-efficient. However, since these integer values are discrete, a lookup table is needed to design a weak classifier for Adaboost learning. Some previous research efforts have focused on minimizing the exponential criterion for Adaboost optimization. In this paper, a method that uses MCT features and improved Adaboost learning based on Newton's optimization of the exponential criterion is proposed for license plate detection. Experimental results on license patch images and field images demonstrate that the proposed method yields higher detection rates with fewer false positives than the conventional method using the original Adaboost learning.
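
The Modified Census Transform itself is well defined: each pixel maps to a 9-bit integer recording which pixels of its 3x3 neighborhood exceed the neighborhood mean. A minimal sketch of that feature extraction follows (the paper's improved Adaboost training with Newton's optimization is not reproduced here); the lookup-table weak classifiers mentioned in the abstract would then be indexed by these integer values.

```python
import numpy as np

def mct(image):
    """Modified Census Transform: map each pixel to a 9-bit integer whose
    bits record which pixels of the 3x3 neighbourhood exceed the
    neighbourhood mean (values fall in [0, 510])."""
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # the 9 shifted copies of the image, one per neighbourhood position
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)], axis=0)
    mean = neigh.mean(axis=0)
    bits = (neigh > mean).astype(np.uint16)
    weights = (1 << np.arange(9)).reshape(9, 1, 1)
    return (bits * weights).sum(axis=0)
```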

A Back-Pressure Algorithm for Lifetime Extension of the Wireless Sensor Networks with Multi-Level Energy Thresholds (센서네트워크 수명 연장을 위한 에너지 임계값 기반 다단계 Back-Pressure 알고리즘)

  • Jeong, Dae-In
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.12B
    • /
    • pp.1083-1096
    • /
    • 2008
  • This paper proposes an energy-aware path management scheme, the so-called TBP (Threshold-based Back-Pressure) algorithm, designed to extend the lifetime of energy-constrained wireless sensor networks. With the goal of fair energy consumption, we extensively utilize the available paths between the source and sink nodes. The traffic distribution feature of the TBP algorithm operates on two scales: the local area and the whole routing area. The threshold and the back-pressure signal are introduced to implement these operations. Notably, the TBP algorithm remains scalable because both the threshold and the back-pressure signal are defined so that their meaning is confined to one hop only. Through several experiments, we observe that the TBP algorithm improves the network-wide energy distribution, which implies an extension of the network lifetime. Additionally, both the delay and throughput results show remarkable improvements, indicating that the energy-aware path control scheme also has the effect of congestion control.
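
The abstract only outlines the TBP idea, so the following is a hypothetical sketch rather than the paper's algorithm: each node carries multi-level energy thresholds, asserts a one-hop back-pressure signal when its residual energy falls below the active threshold, and upstream nodes prefer neighbors that are not asserting back-pressure, spreading consumption over alternative paths.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    energy: float        # residual energy
    thresholds: list     # descending multi-level energy thresholds
    level: int = 0       # index of the currently active threshold

    def back_pressure(self) -> bool:
        # Signal one-hop upstream neighbours once residual energy drops
        # below the currently active threshold.
        return self.level < len(self.thresholds) and self.energy < self.thresholds[self.level]

    def step_down(self):
        # Move to the next (lower) threshold so the node becomes usable
        # again after traffic has been diverted for a while.
        if self.back_pressure():
            self.level += 1

def pick_next_hop(neighbors):
    """Prefer the neighbour with the most residual energy among those not
    asserting back-pressure; fall back to the best of all otherwise."""
    available = [n for n in neighbors if not n.back_pressure()]
    pool = available if available else neighbors
    return max(pool, key=lambda n: n.energy)

if __name__ == "__main__":
    a = Node("a", energy=0.9, thresholds=[0.6, 0.3])
    b = Node("b", energy=0.5, thresholds=[0.6, 0.3])  # below the first threshold
    print(pick_next_hop([a, b]).name)                 # -> "a"
```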

Automatic Object Recognition in 3D Measuring Data (3차원 측정점으로부터의 객체 자동인식)

  • Ahn, Sung-Joon
    • The KIPS Transactions: Part B
    • /
    • v.16B no.1
    • /
    • pp.47-54
    • /
    • 2009
  • Automatic object recognition in 3D measuring data is of great interest in many application fields, e.g., computer vision, reverse engineering, and the digital factory. In this paper we present a software tool for fully automatic object detection and parameter estimation in unordered and noisy point clouds with a large number of data points. The software consists of three interactive modules for model selection, point segmentation, and model fitting, in which orthogonal distance fitting (ODF) plays an important role. The ODF algorithms estimate model parameters by minimizing the sum of squares of the shortest distances between the model feature and the measurement points. The local quadric surface fitted through ODF to a randomly touched small initial patch of the point cloud provides the necessary initial information for the overall procedure of model selection, point segmentation, and model fitting. The performance of the presented software tool is demonstrated by applying it to point clouds.
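
As an illustration of orthogonal distance fitting in the sense described (minimizing the squared shortest distances between the model and the points), the sketch below fits a sphere, one of the simplest model features, with SciPy; for a sphere the shortest distance from a point p to the surface is | ||p - c|| - r |. The paper's quadric-surface fitting and segmentation pipeline is more general than this.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_sphere_odf(points, center0, radius0):
    """Orthogonal distance fitting of a sphere: minimise the sum of squared
    geometric (shortest) distances | ||p - c|| - r | over all points."""
    def residuals(params):
        c, r = params[:3], params[3]
        return np.linalg.norm(points - c, axis=1) - r
    x0 = np.concatenate([np.asarray(center0, dtype=float), [radius0]])
    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3]

# quick check on a synthetic noisy sphere
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = np.array([1.0, -2.0, 0.5]) + 3.0 * dirs + 0.01 * rng.normal(size=(500, 3))
center, radius = fit_sphere_odf(points, center0=points.mean(axis=0), radius0=1.0)
```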

Analysis on the Characteristics of Urban Decline Using GIS and Spatial Statistical Method : The Case of Gwangju Metropolitan City (GIS와 공간통계기법을 활용한 도시쇠퇴 특성 분석 - 광주광역시를 중심으로 -)

  • Jang, Mun-Hyun
    • Journal of the Korean association of regional geographers
    • /
    • v.22 no.2
    • /
    • pp.424-438
    • /
    • 2016
  • A new urban regeneration paradigm is emerging in an effort to prevent urban decline and the hollowing-out phenomenon and to revitalize stagnant local economies. This study analyzes the characteristics of urban decline using GIS and spatial statistical methods, on the basis of the decline criteria in the Urban Regeneration Special Act and a spatial autocorrelation technique. Gwangju Metropolitan City was chosen as the study area, and the decline criteria of the Urban Regeneration Special Act - population reduction, business decline, and deteriorated buildings - were applied as indicators to secure objectivity. In particular, this study differs from existing work in that it applies GIS and spatial statistical techniques to analyze the characteristics of urban decline through spatial autocorrelation. The overall analysis applied the criteria for designating urban regeneration areas and followed an exploratory spatial analysis procedure step by step. The spatial statistical procedure and the urban decline characteristics presented in this study are therefore expected to contribute to diagnosing urban decline at the metropolitan-city level and to provide useful information for spatial decision making in urban regeneration.
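
The abstract refers to a spatial autocorrelation technique without naming it; a common global statistic for this kind of decline indicator is Moran's I, sketched below as an assumption rather than the paper's exact method. Values near +1 indicate that similar indicator values (e.g., population loss rates) cluster in space.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: I = (n / S0) * (z' W z) / (z' z), where z are the
    mean-centred attribute values and W is the spatial weights matrix."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    n = z.size
    S0 = W.sum()
    return (n / S0) * (z @ W @ z) / (z @ z)

# toy example: four adjacent districts with rook-contiguity weights
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([2.0, 2.1, 0.4, 0.3], W))  # positive: decline clusters spatially
```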

Development of Computer Vision System for Individual Recognition and Feature Information of Cow (I) - Individual recognition using the speckle pattern of cow - (젖소의 개체인식 및 형상 정보화를 위한 컴퓨터 시각 시스템 개발 (I) - 반문에 의한 개체인식 -)

  • 이종환
    • Journal of Biosystems Engineering
    • /
    • v.27 no.2
    • /
    • pp.151-160
    • /
    • 2002
  • Cow image processing techniques are useful not only for recognizing individuals but also for building image databases and analyzing the shape of cows. A Holstein cow usually has a unique speckle pattern. In this study, individual recognition of cows was carried out using the speckle pattern and a content-based image retrieval technique. Sixty images of 16 cows were captured under outdoor illumination; these were complicated images due to shadows, obstacles, and the walking posture of the cows. Sixteen images were selected as reference images, one per cow, and 44 query images were used to evaluate the efficiency of individual recognition by matching against each reference image. Run-lengths and positions of runs across the speckle area were calculated from 40 horizontal line profiles of the ROI (region of interest) in a cow body image after three passes of 5$\times$5 median filtering. A similarity measure for recognizing individual cows was calculated using the Euclidean distance of the normalized G-frame histogram (GH), the normalized speckle run-length (BRL), and the normalized x and y positions (BRX, BRY) of the speckle runs. The efficiency of individual recognition was evaluated using Recall (success rate) and AVRR (average rank of relevant images). The success rate of individual recognition was 100% when GH, BRL, BRX, and BRY were used as image query indices. It is concluded that the histogram as a global property and the speckle-run information as local properties are good image features for individual recognition, and that the developed individual recognition system is reliable.
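
A rough sketch of the kind of features the abstract lists is given below, as an assumption about details not spelled out there: a normalized green-channel histogram as the global descriptor and run-lengths/positions of speckle runs along scan lines as the local descriptors, compared by Euclidean distance. The paper's ROI extraction, 40-line profiling, and normalization steps are only approximated.

```python
import numpy as np

def g_histogram(rgb, bins=32):
    """Normalised green-channel histogram (global descriptor, cf. 'GH')."""
    h, _ = np.histogram(rgb[..., 1], bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def speckle_runs(binary_row):
    """Run-lengths and start positions of speckle runs on one scan line
    (binary_row: 1 inside a dark speckle, 0 elsewhere)."""
    padded = np.concatenate([[0], binary_row.astype(np.int8), [0]])
    d = np.diff(padded)
    starts = np.flatnonzero(d == 1)
    ends = np.flatnonzero(d == -1)
    return ends - starts, starts

def distance(query_vec, ref_vec):
    """Euclidean distance between concatenated, equal-length feature vectors
    (histogram + normalised run-lengths and positions); smaller = more similar."""
    return float(np.linalg.norm(np.asarray(query_vec) - np.asarray(ref_vec)))
```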

A Study on Local Filtering of Signal in Wavelet Plane (웨이브렛 평면에서 신호의 국부 필터링에 관한 연구)

  • Bae Sang-Bum;Kim Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.477-480
    • /
    • 2006
  • To represent the features of signals and systems accurately, much research has been carried out in many fields of basic and engineering science, leading to great developments in modern society. Many methods and transforms have been developed to acquire useful information from signals at high speed. Among these, the Fourier transform, which represents a signal as a combination of frequency components, has been applied in most fields. However, because it does not consider time information, the Fourier transform provides no time localization and presents only the overall features of a signal. The wavelet transform, which was proposed to overcome this problem and has recently been applied to a widening range of problems, provides time-frequency localization, and many kinds of wavelets can be chosen according to the application. In this paper, we detect the features of signals using a function regarded as a wavelet and investigate local filtering in the wavelet plane.
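
A minimal sketch of local filtering in the wavelet plane, assuming the PyWavelets (pywt) package: decompose the signal, keep detail coefficients only over a chosen time fraction at every level, and reconstruct. The paper's choice of wavelet function and filtering rule may differ.

```python
import numpy as np
import pywt  # PyWavelets

def local_wavelet_filter(signal, wavelet="db4", level=4, t0=0.0, t1=0.25):
    """Keep detail coefficients only over the time fraction [t0, t1) of the
    signal at every decomposition level; zero the rest and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                      # keep the approximation untouched
    for d in coeffs[1:]:
        lo, hi = int(t0 * d.size), int(t1 * d.size)
        mask = np.zeros_like(d)
        mask[lo:hi] = 1.0
        out.append(d * mask)
    return pywt.waverec(out, wavelet)

t = np.linspace(0.0, 1.0, 1024)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
y = local_wavelet_filter(x, t0=0.0, t1=0.25)  # details kept only near the start
```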

Detecting Salient Regions based on Bottom-up Human Visual Attention Characteristic (인간의 상향식 시각적 주의 특성에 바탕을 둔 현저한 영역 탐지)

  • 최경주;이일병
    • Journal of KIISE: Software and Applications
    • /
    • v.31 no.2
    • /
    • pp.189-202
    • /
    • 2004
  • In this paper, we propose a new method for detecting salient regions in an image. The algorithm is based on the characteristics of human bottom-up visual attention. Several features known to influence human visual attention, such as color and intensity, are extracted from each region of an image. These features are converted to importance values for each region using a local competition function and are combined to produce a saliency map, which represents the saliency at every location in the image by a scalar quantity and guides the selection of attended locations based on the spatial distribution of saliency in relation to perceptual importance. The results indicate that the calculated saliency maps correlate well with human perception of visually important regions.
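
A simplified, Itti-Koch-style sketch of the pipeline the abstract describes is given below; the rescaling used here is only a stand-in for the paper's local competition function, and the feature set (intensity plus two color-opponency channels) is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, center_sigma=2.0, surround_sigma=8.0):
    """Feature map as the absolute difference between a fine-scale and a
    coarse-scale Gaussian-blurred version of the channel."""
    return np.abs(gaussian_filter(channel, center_sigma)
                  - gaussian_filter(channel, surround_sigma))

def rescale(m):
    """Rescale to [0, 1]; a stand-in for the local competition/normalisation."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def saliency_map(rgb):
    """Combine intensity and colour-opponency conspicuity maps."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    intensity = (r + g + b) / 3.0
    rg = r - g                   # red/green opponency
    by = b - (r + g) / 2.0       # blue/yellow opponency
    maps = [center_surround(c) for c in (intensity, rg, by)]
    return rescale(sum(rescale(m) for m in maps))
```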

Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.3
    • /
    • pp.847-852
    • /
    • 2010
  • Text in natural images is a varied and important image feature; therefore, detecting, extracting, and recognizing text has been studied as an important research area. Recently, many applications in various fields have been developed based on mobile phone camera technology. We detect edge components from the gray-scale image and find the boundaries of text regions using the local standard deviation, and we obtain connected components using the Euclidean distance in RGB color space. The detected edges and connected components are labeled, and bounding boxes are obtained for each region. Text candidates are selected using heuristic rules for text. The detected candidate text regions are merged into single candidate regions, and text regions are then detected by verifying the candidates using adjacency and similarity between candidate text regions. Experimental results show that the text region detection rate is improved by exploiting the complementarity of edge and color connected components.
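
The edge/local-standard-deviation branch of the pipeline described in the abstract can be sketched as below; this is an illustrative simplification in which the color-quantization branch, region merging, and verification steps are omitted and the thresholds are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label, find_objects

def local_std(gray, size=7):
    """Local standard deviation map; it is high around text strokes and edges."""
    g = gray.astype(float)
    mean = uniform_filter(g, size)
    mean_sq = uniform_filter(g * g, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def text_candidate_boxes(gray, std_thresh=20.0, min_area=30, max_aspect=10.0):
    """Threshold the local-std map, label connected components, and keep
    bounding boxes that pass simple heuristic rules (area, aspect ratio)."""
    mask = local_std(gray) > std_thresh
    labels, _ = label(mask)
    boxes = []
    for sl in find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area and max(h, w) / max(min(h, w), 1) <= max_aspect:
            boxes.append((sl[1].start, sl[0].start, w, h))  # (x, y, width, height)
    return boxes
```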