• Title/Summary/Keyword: Similar information retrieval

Search Results: 297

Retrieval and Validation of Precipitable Water Vapor using GPS Datasets of Mobile Observation Vehicle on the Eastern Coast of Korea

  • Kim, Yoo-Jun;Kim, Seon-Jeong;Kim, Geon-Tae;Choi, Byoung-Choel;Shim, Jae-Kwan;Kim, Byung-Gon
    • Korean Journal of Remote Sensing / v.32 no.4 / pp.365-382 / 2016
  • The results from the Global Positioning System (GPS) measurements of the Mobile Observation Vehicle (MOVE) on the eastern coast of Korea were compared with REFerence (REF) values from fixed GPS sites to assess the performance of Precipitable Water Vapor (PWV) retrievals in a kinematic environment. MOVE-PWV retrievals showed trends similar to, and fairly good agreement with, REF-PWV, with a Root-Mean-Square Error (RMSE) of 7.4 mm and an $R^2$ of 0.61, statistically significant at a p-value of 0.01. PWV retrievals from the June cases showed better agreement than those of the other months, with a mean bias of 2.1 mm and an RMSE of 3.8 mm. We further investigated the relationships between the determinant factors of GPS signals and the PWV retrievals for a detailed error analysis. The MultiPath (MP) errors of both the L1 and L2 pseudo-ranges were best for the June cases, at 0.75-0.99 m, and both the Position Dilution Of Precision (PDOP) and Signal-to-Noise Ratio (SNR) values in the June cases were better than those in the other cases. That is, key factors that can affect GPS signals, such as MP errors, PDOP, and SNR, should be considered to obtain more stable performance. MOVE data can provide water vapor information at high spatial and temporal resolution during the rapid changes of severe weather that frequently occur on the Korean Peninsula.
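The mean bias and RMSE quoted for the MOVE-vs-REF comparison are standard paired statistics; a minimal sketch (the PWV values below are illustrative, not data from the paper):

```python
import math

def bias_and_rmse(retrieved, reference):
    """Mean bias and RMSE between paired retrievals and reference values."""
    diffs = [r - f for r, f in zip(retrieved, reference)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Illustrative PWV values in mm (not data from the paper)
move_pwv = [30.1, 28.4, 35.2, 33.0]
ref_pwv = [28.0, 27.9, 31.0, 30.5]
print(bias_and_rmse(move_pwv, ref_pwv))
```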

Improvement of Cloud-data Filtering Method Using Spectrum of AERI (AERI 스펙트럼 분석을 통한 구름에 영향을 받은 스펙트럼 자료 제거 방법 개선)

  • Cho, Joon-Sik;Goo, Tae-Young;Shin, Jinho
    • Korean Journal of Remote Sensing / v.31 no.2 / pp.137-148 / 2015
  • The National Institute of Meteorological Research (NIMR) has operated a Fourier Transform InfraRed (FTIR) spectrometer, the Atmospheric Emitted Radiance Interferometer (AERI), on Anmyeon island, Korea, since June 2010. The ground-based AERI, whose hyperspectral infrared sensor is similar to those on satellites, can serve as an alternative means of validating satellite-based remote sensing. In this regard, the NIMR has focused on improving the quality of AERI retrievals, particularly the cloud-data filtering method. An AERI spectrum measured on a typical clear day was selected as the reference spectrum, and the atmospheric-window region was used. Threshold tests were performed to select a valid threshold. Methane was retrieved with the new reference-spectrum method and, separately, with a method based on KLAPS cloud-cover information, and each retrieval was compared with ground-based in-situ measurements. The quality of the AERI methane retrievals with the new method was significantly better than that of the KLAPS-based method. In addition, the vertical total columns of methane from AERI and GOSAT agreed well.
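The filtering idea described, comparing each measured spectrum against a clear-day reference in the atmospheric window and rejecting spectra that deviate beyond a threshold, can be sketched as follows (the function name, channel indices, radiance values, and threshold are illustrative assumptions, not from the paper):

```python
def is_cloud_affected(spectrum, reference, window, threshold):
    """Flag a spectrum as cloud-affected when its mean absolute radiance
    deviation from the clear-day reference, over the atmospheric-window
    channels, exceeds the threshold."""
    dev = sum(abs(spectrum[i] - reference[i]) for i in window) / len(window)
    return dev > threshold

reference = [1.0, 1.1, 1.2, 1.0]   # clear-day reference spectrum (illustrative)
clear = [1.0, 1.15, 1.18, 1.02]
cloudy = [2.0, 2.3, 2.1, 1.9]
window = range(4)                  # atmospheric-window channel indices
print(is_cloud_affected(clear, reference, window, 0.5))   # small deviation
print(is_cloud_affected(cloudy, reference, window, 0.5))  # large deviation
```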

SOSiM: Shape-based Object Similarity Matching using Shape Feature Descriptors (SOSiM: 형태 특징 기술자를 사용한 형태 기반 객체 유사성 매칭)

  • Noh, Chung-Ho;Lee, Seok-Lyong;Chung, Chin-Wan;Kim, Sang-Hee;Kim, Deok-Hwan
    • Journal of KIISE:Databases / v.36 no.2 / pp.73-83 / 2009
  • In this paper we propose an object similarity matching method based on the shape characteristics of an object in an image. The proposed method extracts edge points from the edges of objects and generates a log-polar histogram with respect to each edge point to represent the relative placement of the extracted points. It performs matching by comparing the polar histograms of two edge points sequentially along the edges of objects, and uses the well-known k-NN (nearest neighbor) approach to retrieve similar objects from a database. To verify the proposed method, we compared it to the existing Shape-Context method. Experimental results reveal that our method is more accurate in object matching than the existing method: when k=5, the precision of our method is 0.75-0.90 while that of the existing one is 0.37, and when k=10, the precision of our method is 0.61-0.80 while that of the existing one is 0.31. In the rotational-transformation experiment, our method is also more robust, with a precision of 0.69 against 0.30 for the existing method.
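A log-polar histogram of the relative placement of edge points around a given point, the shape-context-style descriptor described above, can be sketched as follows (bin counts, the radius cutoff, and names are illustrative choices, not the paper's exact parameterization):

```python
import math

def log_polar_histogram(center, points, r_bins=3, theta_bins=4, r_max=10.0):
    """Histogram of the relative placement of edge points around one point,
    binned in log-radius and angle (a shape-context-style descriptor)."""
    hist = [[0] * theta_bins for _ in range(r_bins)]
    for (x, y) in points:
        dx, dy = x - center[0], y - center[1]
        r = math.hypot(dx, dy)
        if r == 0 or r > r_max:
            continue  # skip the point itself and points beyond the cutoff
        # log-spaced radial bin, uniform angular bin
        ri = min(int(math.log(r + 1) / math.log(r_max + 1) * r_bins), r_bins - 1)
        ti = int((math.atan2(dy, dx) + math.pi) / (2 * math.pi) * theta_bins) % theta_bins
        hist[ri][ti] += 1
    return hist

# Four nearby edge points fall in the histogram; the far one is cut off.
h = log_polar_histogram((0.0, 0.0), [(1, 0), (0, 2), (-3, 0), (0, -4), (20, 0)])
print(h)
```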

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data is unstructured and complex in form. As the importance of efficient management and retrieval of video data grows, studies on video parsing based on the visual features of video content have been conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined only physically does not consider the semantic associations within video data. Recently, studies that use clustering methods to group semantically associated video shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous studies on video scene detection rely on clustering algorithms whose shot-similarity measures depend mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which detects video scenes by clustering similar shots belonging to the same event based on visual features including the color histogram, the corner edge, and the object color histogram. The SDCEO is notable in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual as well as abrupt transitions. The SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises the Color Histogram Analysis step and the Corner Edge Analysis step.
In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shots by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shots by comparing the corner edge feature between the last frame of the previous shot and the first frame of the next shot. In the Key-frame Extraction step, SDCEO compares each frame with all other frames in the same shot, measures similarity using the Euclidean distance between histograms, and selects the frame most similar to all the others as the key-frame. The Video Scene Detector clusters associated shots belonging to the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram. SDCEO forms the final video scenes by repeated clustering until the similarity distance between shots falls below the threshold h. In this paper, we construct a prototype of SDCEO and carry out experiments on manually constructed baseline data; the experimental results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
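The Color Histogram Analysis step, joining consecutive frames into the same shot unless the Euclidean distance between their histograms exceeds a threshold, can be sketched as follows (the histograms and threshold are illustrative, not from the paper):

```python
import math

def hist_distance(h1, h2):
    """Euclidean distance between two color histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def shot_boundaries(frame_hists, threshold):
    """Indices of frames whose histogram differs enough from the previous
    frame's to start a new shot."""
    return [i for i in range(1, len(frame_hists))
            if hist_distance(frame_hists[i - 1], frame_hists[i]) > threshold]

# Illustrative normalized 2-bin histograms (not data from the paper):
# frames 0-1 look alike, frame 2 starts a visually different shot.
frames = [[0.5, 0.5], [0.52, 0.48], [0.1, 0.9], [0.12, 0.88]]
print(shot_boundaries(frames, 0.3))
```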

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is common in Korean and Japanese sentences, a phenomenon not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias such as Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is closely related to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the candidates for the antecedent are only noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If the antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previous research works is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, and the other indicating that it is not. The structural SVM we use is based on the modified Pegasos algorithm, which exploits a subgradient-descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples prepared from the annotated corpus are used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification with a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means a state-of-the-art system can be developed with our technique. Future work that enables the system to utilize semantic information is expected to lead to a significant performance improvement.
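The reformulation above replaces per-candidate binary classification with labeling the whole candidate sequence, where exactly one position carries the antecedent label. A structural SVM learns a joint score over entire label sequences; the toy sketch below only illustrates that output space using independent per-position confidence scores (all names and values are illustrative, not the paper's model):

```python
def best_label_sequence(scores):
    """Assign antecedent-indicator labels to a sequence of noun-phrase
    candidates: exactly one position gets label 1 (antecedent), the rest 0.
    With independent per-position scores, the best such sequence simply
    marks the argmax position."""
    k = max(range(len(scores)), key=lambda i: scores[i])
    return [1 if i == k else 0 for i in range(len(scores))]

# Toy confidence scores for three candidate noun phrases (illustrative only)
print(best_label_sequence([0.2, 0.9, 0.4]))
```

A real structural SVM would score whole sequences jointly, so interactions between positions (not just individual confidences) influence which sequence wins.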

Retrieval of High Resolution Surface Net Radiation for Urban Area Using Satellite and CFD Model Data Fusion (위성 및 CFD모델 자료의 융합을 통한 도시지역에서의 고해상도 지표 순복사 산출)

  • Kim, Honghee;Lee, Darae;Choi, Sungwon;Jin, Donghyun;Her, Morang;Kim, Jajin;Hong, Jinkyu;Hong, Je-Woo;Lee, Keunmin;Han, Kyung-Soo
    • Korean Journal of Remote Sensing / v.34 no.2_1 / pp.295-300 / 2018
  • Net radiation is the total radiative energy available as heat flux in the Earth's energy cycle, and surface net radiation is an important factor in fields such as hydrology, climate and meteorological studies, and agriculture. Monitoring net radiation through remote sensing is very important for understanding heat-island and urbanization trends. However, net radiation estimation using only remote sensing data generally suffers accuracy differences depending on cloud conditions. Therefore, in this paper, we retrieved and monitored high-resolution surface net radiation at 1-hour intervals in Eunpyeong New Town, an urbanized area, using Communication, Ocean and Meteorological Satellite (COMS) data, Landsat-8 data, and Computational Fluid Dynamics (CFD) model data reflecting differences in building height. We compared the estimated net radiation with observations at a flux tower. The estimated net radiation followed the observed net radiation closely overall, with an RMSE of $54.29\;Wm^{-2}$ and a bias of $27.42\;Wm^{-2}$. In addition, the calculated net radiation reflected meteorological conditions such as precipitation well, and its spatial distribution showed the distinct net radiation characteristics of vegetated and artificial areas.

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through to the outputs. The CNN layer structure is well suited to image classification, as it comprises convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as the apparel itself or a professional model wearing it. Such images may not train the model effectively when one wants to classify street-fashion or walking images, which are taken in uncontrolled conditions and involve movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training into two steps.
First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since we could not find any publicly available runway dataset, we collected one from Google Image Search, obtaining 2426 images of 32 major fashion brands: Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieves an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model on images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow Slim, we reduce the time spent training the classification model to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street-fashion image.
To be specific, a runway query image can be used in a mobile application service during fashion week to facilitate brand search; a street-style query image can be classified during fashion editorial work to label the brand or style; and a website query image can be processed by an e-commerce multi-complex service providing item information or recommending similar items.
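The two-stage transfer-learning recipe, pre-training followed by fine-tuning, amounts to freezing the pre-trained layers as a feature extractor and training only a new classification head on the target data. A toy, framework-free sketch of that idea (the paper itself fine-tunes GoogLeNet via TensorFlow Slim; everything below, including the data and the tiny feature map, is an illustrative stand-in):

```python
import math

def frozen_features(x):
    """Stand-in for a pre-trained feature extractor; its 'weights' are
    fixed and never updated during fine-tuning."""
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, lr=0.5, epochs=200):
    """Train only a logistic-regression head on top of frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)
            p = 1 / (1 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w = [w[i] - lr * g * f[i] for i in range(2)]
            b -= lr * g
    return w, b

def predict(params, x):
    w, b = params
    f = frozen_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Tiny two-class toy dataset (illustrative only)
data = [([0.0, 0.0], 0), ([0.1, 0.2], 0), ([1.0, 0.9], 1), ([0.9, 1.1], 1)]
params = train_head(data)
print([predict(params, x) for x, _ in data])
```

Keeping the extractor fixed is what makes fine-tuning cheap: only the small head's parameters are updated, which is why the full training run in the paper takes minutes rather than the days needed to pre-train on ImageNet.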