• Title/Summary/Keyword: retrieve

Search Result 1,079

Investigation of SO2 Effect on TOMS O3 Retrieval from OMI Measurement in China (OMI 위성센서를 이용한 중국 지역에서 TOMS 오존 산출에 대한 이산화황의 영향 조사 연구)

  • Choi, Wonei;Hong, Hyunkee;Kim, Daewon;Ryu, Jae-Yong;Lee, Hanlim
    • Korean Journal of Remote Sensing / v.32 no.6 / pp.629-637 / 2016
  • In this study, we identified the $SO_2$ effect on $O_3$ retrieval from Ozone Monitoring Instrument (OMI) measurements over a Chinese industrial region from 2005 through 2007. Planetary boundary layer (PBL) $SO_2$ data measured by the OMI sensor are used. OMI-Total Ozone Mapping Spectrometer (TOMS) total $O_3$ is compared with OMI-Differential Optical Absorption Spectroscopy (DOAS) total $O_3$ under various PBL $SO_2$ conditions. The difference between OMI-TOMS and OMI-DOAS total $O_3$ (T-D) shows a dependency on $SO_2$ (correlation coefficient R = 0.36). Since aerosol has been reported to cause uncertainty in both the OMI-TOMS and OMI-DOAS total $O_3$ retrievals, the aerosol effect on the relationship between PBL $SO_2$ and T-D is investigated with changing Aerosol Optical Depth (AOD). The aerosol effect on this relationship is negligible, with similar slopes ($1.83 \leq slope \leq 2.36$) between PBL $SO_2$ and T-D under various AOD conditions. We also found that the rates of change in T-D per 1.0 DU change of $SO_2$ in the PBL, middle troposphere (TRM), and upper troposphere and stratosphere (STL) are 1.6 DU, 3.9 DU, and 4.9 DU, respectively. This shows that the altitude where $SO_2$ exists can affect the value of T-D, which could be due to reduced absolute radiance sensitivity in the boundary layer at 317.5 nm, the wavelength used to retrieve OMI-TOMS ozone in the boundary layer.
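
In outline, the dependence analysis above reduces to regressing T-D on PBL $SO_2$, repeated inside AOD bins to check for an aerosol effect. A minimal sketch with synthetic collocations follows; the variable names and data are illustrative assumptions, not the paper's dataset.

```python
# Sketch: regress the OMI-TOMS minus OMI-DOAS total-O3 difference (T-D)
# on PBL SO2, overall and within AOD bins. Synthetic data for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pbl_so2 = rng.uniform(0.0, 4.0, 1000)                   # PBL SO2 [DU]
aod = rng.uniform(0.0, 1.5, 1000)                       # aerosol optical depth
t_minus_d = 2.0 * pbl_so2 + rng.normal(0.0, 2.0, 1000)  # T-D [DU]

res = stats.linregress(pbl_so2, t_minus_d)
print(f"all data: slope={res.slope:.2f}, R={res.rvalue:.2f}")

# Similar slopes across AOD bins would indicate a negligible aerosol
# effect on the SO2 / T-D relationship, as reported above.
for lo, hi in [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5)]:
    m = (aod >= lo) & (aod < hi)
    r = stats.linregress(pbl_so2[m], t_minus_d[m])
    print(f"AOD {lo:.1f}-{hi:.1f}: slope={r.slope:.2f}, R={r.rvalue:.2f}")
```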

Methodology for Issue-related R&D Keywords Packaging Using Text Mining (텍스트 마이닝 기반의 이슈 관련 R&D 키워드 패키징 방법론)

  • Hyun, Yoonjin;Shun, William Wong Xiu;Kim, Namgyu
    • Journal of Internet Computing and Services / v.16 no.2 / pp.57-66 / 2015
  • Considerable research efforts are being directed toward analyzing unstructured data such as text files and log files using commercial and noncommercial analytical tools. In particular, researchers are trying to extract meaningful knowledge through text mining not only in business but also in many other areas such as politics, economics, and cultural studies. For instance, several studies have examined national pending issues by analyzing large volumes of text on various social issues. However, it is difficult to provide successful information services that can identify R&D documents on specific national pending issues. While users may specify certain keywords relating to national pending issues, they usually fail to retrieve appropriate R&D information, primarily due to discrepancies between these terms and the corresponding terms actually used in the R&D documents. Thus, an intermediate logic is needed to overcome these discrepancies and to identify and package appropriate R&D information on specific national pending issues. To address this requirement, three methodologies are proposed in this study: a hybrid methodology for extracting and integrating keywords pertaining to national pending issues, a methodology for packaging R&D information that corresponds to national pending issues, and a methodology for constructing an associative issue network based on relevant R&D information. Data analysis techniques such as text mining, social network analysis, and association rule mining are utilized to establish these methodologies. The experiments show that the keyword enhancement rate achieved by the proposed integration methodology is about 42.8%. For the second objective, three key analyses were conducted and a number of association rules between national pending issue keywords and R&D keywords were derived. The experiment for the third objective, issue clustering based on R&D keywords, is still in progress and is expected to give tangible results in the future.
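
The association-rule step above can be illustrated by a bare support/confidence computation between issue keywords and R&D keywords. The toy documents, keyword sets, and thresholds are illustrative assumptions, not the study's corpus or settings.

```python
# Sketch: derive "issue keyword -> R&D keyword" association rules by
# counting co-occurrence in documents. Toy data for illustration only.
from itertools import product

docs = [
    {"youth unemployment", "job creation", "vocational training"},
    {"youth unemployment", "job creation", "startup support"},
    {"aging society", "welfare", "job creation"},
    {"youth unemployment", "startup support"},
]
issue_kw = {"youth unemployment", "aging society"}
rnd_kw = {"job creation", "vocational training", "startup support", "welfare"}

min_support, min_confidence = 0.25, 0.6
n = len(docs)
for a, b in product(issue_kw, rnd_kw):
    support_a = sum(a in d for d in docs) / n
    support_ab = sum(a in d and b in d for d in docs) / n
    if support_a == 0 or support_ab < min_support:
        continue
    confidence = support_ab / support_a
    if confidence >= min_confidence:
        print(f"{a} -> {b}: support={support_ab:.2f}, conf={confidence:.2f}")
```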

Development of JPEG2000 Viewer for Mobile Image System (이동형 의료영상 장치를 위한 JPEG2000 영상 뷰어 개발)

  • 김새롬;정해조;강원석;이재훈;이상호;신성범;유선국;김희중
    • Progress in Medical Physics / v.14 no.2 / pp.124-130 / 2003
  • Currently, as a consequence of PACS (Picture Archiving and Communication System) implementation, many hospitals are replacing conventional film-type interpretation of diagnostic medical images with a digital format in which images can also be saved and retrieved. However, the big limitation of PACS is considered to be its lack of mobility. The purpose of this study is to determine the optimal communication packet size, considering the conditions encountered in wireless communication. After encoding medical images with the JPEG2000 compression method, an auto-error-correction technique was embodied to prevent the loss of packets during wireless communication. A PC-class server, with capabilities to load and collect data, save images, and connect with other networks, was installed. Image data were compressed using the JPEG2000 algorithm, which offers high energy compaction and high compression ratios, for communication over a wireless network. Image data were also transmitted in block units coded by JPEG2000 to prevent the loss of packets in the wireless network. When JPEG2000 image data were decoded on a PDA (Personal Digital Assistant), decoding was instantaneous for an MR (Magnetic Resonance) head image of 256×256 pixels, while it took approximately 5 seconds to decode a CR (Computed Radiography) chest image of 800×790 pixels. In the transmission of image data using a CDMA 1X (Code-Division Multiple Access) module, 256 bytes/sec was a stable transmission rate, but packets were lost at intervals at a transmission rate of 1 Kbyte/sec. With a transmission rate above 1 Kbyte/sec, however, packets were not lost on a wireless LAN. Current PACS is not compatible with wireless networks because it has no interface between the wired and wireless sides. Thus, a mobile JPEG2000 image viewing system was developed to complement mobility, a limitation of PACS. Moreover, the weak connections of the wireless network were compensated for by re-transmitting image data within set limits. The results of this study are expected to serve as an interface between current wired-network PACS and mobile devices.

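The block-wise transfer with retransmission described above can be sketched as follows; the packet size, the simulated lossy channel, and all helper names are assumptions for illustration, and a real system would carry an actual JPEG2000 codestream over the wireless link.

```python
# Sketch: split a (stand-in) JPEG2000 codestream into numbered packets
# with CRCs, send them over a simulated lossy link, and re-request any
# packet that is lost or fails its checksum until the image is complete.
import random
import zlib

PACKET_SIZE = 256  # payload bytes per packet (cf. the stable rate above)

def packetize(codestream: bytes):
    chunks = [codestream[i:i + PACKET_SIZE]
              for i in range(0, len(codestream), PACKET_SIZE)]
    return [(seq, zlib.crc32(chunk), chunk) for seq, chunk in enumerate(chunks)]

def lossy_send(packet, loss_rate=0.1):
    """Simulated wireless link that occasionally drops a packet."""
    return None if random.random() < loss_rate else packet

def transfer(codestream: bytes) -> bytes:
    packets = packetize(codestream)
    received, pending = {}, list(range(len(packets)))
    while pending:  # retransmit until every block has arrived intact
        still_missing = []
        for seq in pending:
            pkt = lossy_send(packets[seq])
            if pkt is None or zlib.crc32(pkt[2]) != pkt[1]:
                still_missing.append(seq)  # lost or corrupted: re-request
            else:
                received[seq] = pkt[2]
        pending = still_missing
    return b"".join(received[i] for i in range(len(packets)))

data = bytes(random.randrange(256) for _ in range(4096))  # stand-in codestream
assert transfer(data) == data
```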

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.67-84 / 2010
  • In business terms, forecasting is the task of estimating what will happen in the future in order to make managerial decisions and plans. Accurate forecasting is therefore very important for major managerial decision making and is the basis for various business strategies. But it is very difficult to make an unbiased and consistent estimate because of uncertainty and complexity in the future business environment. That is why we should use scientific forecasting models to support business decision making and make an effort to minimize the model's forecasting error, i.e., the difference between observation and estimate. Nevertheless, minimizing this error is not an easy task. Case-based reasoning is a problem-solving method that utilizes past similar cases to solve a current problem. To build successful case-based reasoning models, retrieving not only the most similar case but also the most relevant case is very important. To retrieve similar and relevant cases from past cases, the measurement of similarity between cases is a key factor, and it is especially difficult when the cases contain symbolic data. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relation and composition. In particular, two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix following binary logic (a judgment of sameness between two symbolic values), and the other derives the similarity matrix following fuzzy relation and composition. The study proceeds in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in data gathering and preprocessing, we collect a data set with a categorical dependent variable. The data set is cross-sectional, and its independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies. The ratings employed in this study cover all bonds rated by one of the bond rating agencies in Korea. Our total sample includes 1,816 companies whose commercial papers were rated in the period 1997~2000. Credit grades are defined as outputs and classified into 5 rating categories (A1, A2, A3, B, C) according to credit level. Second, in model building and analysis, we derive the similarity matrices following binary logic and fuzzy composition to measure the similarity between cases containing symbolic data; the types of fuzzy composition used are max-min, max-product, and max-average. The analysis is then carried out by the case-based reasoning approach with the derived similarity matrix. Third, in validation analysis, we verify the validity of the model through the McNemar test based on the hit ratio. Finally, we draw conclusions. As a result, the similarity measuring method using fuzzy relation and composition shows good forecasting performance compared with the similarity measuring method using binary logic for similarity measurement between two symbolic values. However, the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contribution of this study is to propose another methodology in which fuzzy relation and fuzzy composition can be applied to measure the similarity between two symbolic values, the most important factor in building a case-based reasoning model.
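
A minimal sketch of the two similarity schemes compared above follows, assuming a hypothetical fuzzy relation between rating symbols; the paper's estimated relation matrix and its max-product and max-average variants are omitted.

```python
# Sketch: binary-logic similarity (identical symbols only) versus a
# fuzzy relation combined by max-min composition. The relation R between
# credit grades is an illustrative assumption.
import numpy as np

grades = ["A1", "A2", "A3", "B", "C"]
binary_sim = np.eye(len(grades))  # 1 iff the two symbols are the same

R = np.array([  # assumed fuzzy relation: nearby grades partially similar
    [1.0, 0.8, 0.5, 0.2, 0.0],
    [0.8, 1.0, 0.8, 0.5, 0.2],
    [0.5, 0.8, 1.0, 0.8, 0.5],
    [0.2, 0.5, 0.8, 1.0, 0.8],
    [0.0, 0.2, 0.5, 0.8, 1.0],
])

def max_min_composition(r1: np.ndarray, r2: np.ndarray) -> np.ndarray:
    """(r1 o r2)[i, j] = max_k min(r1[i, k], r2[k, j])."""
    return np.max(np.minimum(r1[:, :, None], r2[None, :, :]), axis=1)

fuzzy_sim = max_min_composition(R, R)

i, j = grades.index("A1"), grades.index("A3")
print("binary:", binary_sim[i, j])  # 0.0: the symbols differ
print("fuzzy :", fuzzy_sim[i, j])   # partial similarity is preserved
```

In the case-based reasoning step, such a matrix replaces exact matching when aggregating attribute-level similarities between a new case and stored cases, so near-miss symbolic values still contribute to retrieval.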

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Content-based video analysis using visual features has been the main approach for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, one of the typical types of video data, is organized by story as well as by audio-visual data. This causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis when content-based video analysis using only low-level audio-visual data is applied to movie information retrieval. The reason for this semantic gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents, or track meaning changes, in order to accurately retrieve story information. Recently, story-based video analysis techniques using a social network concept have emerged for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss profound information, such as the emotions indicating the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story. This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks, and suggests the necessary elements, namely characters, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that can track a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where main characters first meet or where they stay for long periods can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of story-related information and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movements, and conflicts and resolutions in the story.
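
The emotional-word extraction via WordNet's hierarchy might look like the sketch below, which flags a word when one of its noun senses has an emotion or feeling hypernym; the sample line of dialogue is invented, not taken from the Pretty Woman script.

```python
# Sketch: detect emotion words by walking each noun sense's hypernym
# closure up to emotion/feeling roots in WordNet. Requires the NLTK
# WordNet corpus (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

EMOTION_ROOTS = {wn.synset("emotion.n.01"), wn.synset("feeling.n.01")}

def is_emotion_word(word: str) -> bool:
    """True if any noun sense of `word` lies under an emotion/feeling root."""
    for syn in wn.synsets(word, pos=wn.NOUN):
        ancestors = set(syn.closure(lambda s: s.hypernyms()))
        if syn in EMOTION_ROOTS or ancestors & EMOTION_ROOTS:
            return True
    return False

dialogue = "joy turned to anger and then to sadness"
print([w for w in dialogue.split() if is_emotion_word(w)])
# expected: ['joy', 'anger', 'sadness']
```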

Estimation of Precipitable Water from the GMS-5 Split Window Data (GMS-5 Split Window 자료를 이용한 가강수량 산출)

  • 손승희;정효상;김금란;이정환
    • Korean Journal of Remote Sensing / v.14 no.1 / pp.53-68 / 1998
  • Observation of the behavior of hydrometeors in the atmosphere is important for understanding weather and climate. With conventional observations, we can obtain the distribution of water vapor only at a limited number of points on the earth. In this study, precipitable water has been estimated from the split-window channel data of GMS-5, based upon the technique developed by Chesters et al. (1983). To retrieve precipitable water, a water vapor absorption parameter depending on the filter function of the sensor has been derived using regression analysis between the split-window channel data and the radiosonde data observed at the Osan, Pohang, Kwangju, and Cheju stations over 4 months. The 700 hPa air temperature from the Global Spectral Model of the Korea Meteorological Administration (GSM/KMA) has been used as the mean air temperature for the single-layer radiation model. The retrieved precipitable water for the period from August 1996 through December 1996 is compared with radiosonde data. The root mean square differences between radiosonde observations and the GMS-5 retrievals range from 0.65 g/$cm^2$ to 1.09 g/$cm^2$, with a correlation coefficient of 0.46 on an hourly basis. The monthly distribution of precipitable water from GMS-5 is represented reasonably well at large scales. Precipitable water is produced 4 times a day at the Korea Meteorological Administration in the form of grid point data with 0.5 degree lat./lon. resolution. The data can be used in the objective analysis for numerical weather prediction and to increase the accuracy of humidity analysis, especially under clear-sky conditions. The data are also a useful complement to existing data sets for climatological research. However, a higher correlation between radiosonde observations and the GMS-5 retrievals is necessary for operational applications.
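
Reduced to its core, the retrieval step above is a regression of radiosonde precipitable water on the split-window brightness-temperature difference. The sketch below uses synthetic collocations and omits the 700 hPa mean-temperature term of the single-layer model; both simplifications are assumptions.

```python
# Sketch: fit precipitable water PW ~ a * (T11 - T12) + b on collocated
# satellite/radiosonde samples, then evaluate RMSD and correlation.
# Synthetic data stand in for the GMS-5 / radiosonde collocations.
import numpy as np

rng = np.random.default_rng(1)
t11 = rng.uniform(270.0, 300.0, 200)     # 11-um brightness temperature [K]
t12 = t11 - rng.uniform(0.5, 4.0, 200)   # 12-um channel is slightly colder
pw_sonde = 0.9 * (t11 - t12) + rng.normal(0.0, 0.3, 200)  # PW [g/cm^2]

A = np.vstack([t11 - t12, np.ones_like(t11)]).T
(a, b), *_ = np.linalg.lstsq(A, pw_sonde, rcond=None)

pw_retrieved = a * (t11 - t12) + b
rmsd = np.sqrt(np.mean((pw_retrieved - pw_sonde) ** 2))
corr = np.corrcoef(pw_retrieved, pw_sonde)[0, 1]
print(f"a={a:.2f}, b={b:.2f}, RMSD={rmsd:.2f} g/cm^2, R={corr:.2f}")
```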

Research on the Re-organization of the Administration of Labor's Records in the custody of the National Archives (노동청 기록의 재조직에 관한 연구 - 국가기록원 소장 기록을 중심으로 -)

  • Kwak, Kun-Hong
    • The Korean Journal of Archival Studies / no.23 / pp.141-178 / 2010
  • The Administration of Labor was responsible for technical and practical functions such as making labor policy and implementing the relevant laws. However, only a few records that could shed light on the labor policy-making process have been transferred to the National Archives. This is a typical example of the discontinuity, imbalance, and disorderly filing of administrative records in Korea. Naturally, it is almost impossible to retrieve the appropriate content through record file names alone; users must take the trouble to compare record items and their content one by one. For the re-organization of the Administration of Labor's records, this research suggests a four-level analysis of the Administration's functions, so that the Administration of Labor's surviving records can be linked to each level of function. The publication of a 'Records Abstract Catalog' providing users with more information about the records would pave the way for easier access. In addition, this research suggests the logical re-filing of the surviving records whose order or sequence cannot be established. This re-organization of the surviving records would help to establish an acquisition and appraisal policy for labor records, as well as new methods of description and finding aids hereafter. Drawing up a labor history map is a starting point for the acquisition strategy for labor records, which could allow users to gain systematic access to the surviving records. Of course, extensive investigation and research on the surviving records is a prerequisite for the map. It would also be necessary to research the surviving records of other government agencies, including ministries in the economic and social areas, investigation agencies, and the National Assembly. It is also necessary to arrange and typify the significant incidents and activities in labor history within thematic and periodic frames. If the surviving records and these accomplishments can be understood and connected comprehensively, it will be of great help for the acquisition of labor records and for related oral history projects.

Detection of Surface Changes by the 6th North Korea Nuclear Test Using High-resolution Satellite Imagery (고해상도 위성영상을 활용한 북한 6차 핵실험 이후 지표변화 관측)

  • Lee, Won-Jin;Sun, Jongsun;Jung, Hyung-Sup;Park, Sun-Cheon;Lee, Duk Kee;Oh, Kwan-Young
    • Korean Journal of Remote Sensing / v.34 no.6_4 / pp.1479-1488 / 2018
  • On September 3rd, 2017, strong artificial seismic signals from North Korea were detected by the KMA (Korea Meteorological Administration) seismic network. The epicenter was estimated to be at the Punggye-ri nuclear test site, and the event was the most powerful to date. It could not be studied well due to limited accessibility and the lack of geodetic measurements. Therefore, we used remote sensing data to analyze surface changes around the Mt. Mantap area. First, we tried to detect surface deformation using the InSAR method with Advanced Land Observing Satellite-2 (ALOS-2) data. Even though ALOS-2 uses the long-wavelength L-band, the method did not work well for this particular case because of decorrelation in the interferogram, most likely caused by the large deformation near the Mt. Mantap area. To overcome this decorrelation, we applied the offset tracking method to measure the deformation. However, this method is sensitive to the window kernel size, so we applied various window sizes from 32 to 224 in 16 steps. We retrieved a 2D surface deformation of up to about 3 m on the west side of Mt. Mantap. Second, we used Pleiades-A/B high-resolution optical satellite images acquired before and after the 6th nuclear test. We detected widespread surface damage around the top of Mt. Mantap, such as landslides and a suspected collapse area. This phenomenon may have been caused by a very strong underground nuclear explosion test. High-resolution satellite images can thus be used to analyze non-accessible areas.
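
A bare-bones version of the offset-tracking step, matching patches by normalized cross-correlation over a search neighborhood and repeating for several window (kernel) sizes, is sketched below on synthetic images with a known shift; real processing of ALOS-2 amplitude pairs is considerably more involved.

```python
# Sketch: find the (dy, dx) offset of a patch between pre- and post-event
# images by maximizing normalized cross-correlation, for several window
# sizes. Random test images with a known (3, -2) pixel shift stand in
# for the SAR amplitude pair.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def track_offset(pre, post, cy, cx, win, search=8):
    """Best (dy, dx) for the win x win patch centered at (cy, cx)."""
    h = win // 2
    ref = pre[cy - h:cy + h, cx - h:cx + h]
    best, best_dyx = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = post[cy - h + dy:cy + h + dy, cx - h + dx:cx + h + dx]
            score = ncc(ref, cand)
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx

rng = np.random.default_rng(2)
pre = rng.normal(size=(128, 128))
post = np.roll(pre, shift=(3, -2), axis=(0, 1))  # known ground truth

# The paper sweeps window sizes from 32 to 224 in 16 steps; a small
# subset is enough to show the kernel-size dependence here.
for win in (16, 32, 48):
    print(win, track_offset(pre, post, 64, 64, win))  # expect (3, -2)
```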

Impact of Lambertian Cloud Top Pressure Error on Ozone Profile Retrieval Using OMI (램버시안 구름 모델의 운정기압 오차가 OMI 오존 프로파일 산출에 미치는 영향)

  • Nam, Hyeonshik;Kim, Jae Hawn;Shin, Daegeun;Baek, Kanghyun
    • Korean Journal of Remote Sensing / v.35 no.3 / pp.347-358 / 2019
  • The Lambertian cloud model is a simplified cloud model used to effectively retrieve the vertical ozone distribution of an atmosphere in which clouds exist. With the Lambertian cloud model, the optical characteristics of clouds required for radiative transfer simulation are parametrized by the Optical Centroid Cloud Pressure (OCCP) and the Effective Cloud Fraction (ECF), and the accuracy of each parameter greatly affects the accuracy of the radiance simulation. However, it is very difficult to generalize the vertical ozone error due to OCCP error, because it varies depending on the radiation environment and the algorithm settings. It is also difficult to isolate the effect of OCCP error, because it is mixed with the other errors that arise in the vertical ozone retrieval process. This study analyzed the ozone retrieval error due to OCCP error using two methods. First, we simulated the impact of OCCP error on ozone retrieval based on Optimal Estimation. Using the LIDORT radiative transfer model, the radiance error due to the OCCP error is calculated. To convert the radiance error into an ozone retrieval error, the radiance error is propagated through the conversion equation of the optimal estimation method. The results show that an OCCP error of 100 hPa leads to an overestimation of total ozone by 2.7%. Second, a case analysis is carried out to find the ozone retrieval error due to OCCP error. For the case analysis, the ozone retrieval error is simulated assuming an OCCP error and compared with the ozone error in cases from PROFOZ 2005-2006, an OMI ozone profile product. In order to define the ozone error in each case, we made an idealized assumption; considering albedo and the horizontal variation of ozone needed to satisfy this assumption, 49 cases were selected. As a result, 27 out of the 49 cases (about 55%) showed a correlation of 0.5 or more. This result shows that the OCCP error has a significant influence on the accuracy of the ozone profile retrieval.
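
The first analysis, converting a radiance error into a retrieval error, can be sketched with the standard optimal-estimation gain matrix $G = (K^T S_{\epsilon}^{-1} K + S_a^{-1})^{-1} K^T S_{\epsilon}^{-1}$ applied to a radiance perturbation. The Jacobian, covariances, and perturbation below are illustrative assumptions, not LIDORT output.

```python
# Sketch: propagate a radiance error delta_y (standing in for the
# OCCP-induced error) into an ozone-profile error delta_x = G @ delta_y
# through the optimal-estimation gain matrix.
import numpy as np

rng = np.random.default_rng(3)
n_layers, n_wavelengths = 24, 60

K = rng.normal(scale=0.1, size=(n_wavelengths, n_layers))  # weighting functions
Se_inv = np.diag(np.full(n_wavelengths, 1.0e4))  # inverse measurement covariance
Sa_inv = np.diag(np.full(n_layers, 4.0))         # inverse a-priori covariance

G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)  # gain matrix

delta_y = rng.normal(scale=0.01, size=n_wavelengths)  # radiance perturbation
delta_x = G @ delta_y                                 # layer ozone errors
print("total-ozone error:", delta_x.sum())
```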

Development of a Retrieval Algorithm for Adjustment of Satellite-viewed Cloudiness (위성관측운량 보정을 위한 알고리즘의 개발)

  • Son, Jiyoung;Lee, Yoon-Kyoung;Choi, Yong-Sang;Ok, Jung;Kim, Hye-Sil
    • Korean Journal of Remote Sensing / v.35 no.3 / pp.415-431 / 2019
  • The satellite-viewed cloudiness, the ratio of cloudy pixels to total pixels ($C_{sat,\;prev}$), inevitably differs from the "ground-viewed" cloudiness ($C_{grd}$) due to the different viewpoints. Here we develop an algorithm to retrieve a satellite-viewed cloudiness adjusted to $C_{grd}$ ($C_{sat,\;adj}$). The key process of the algorithm is to convert the cloudiness projected on the plane surface into the cloudiness on the celestial hemisphere seen by the observer. For this conversion, supplementary satellite retrievals such as cloud detection and cloud top pressure are used, as they provide the locations of cloudy pixels and cloud base height information, respectively. The algorithm is tested on Himawari-8 level 1B data. $C_{sat,\;adj}$ and $C_{sat,\;prev}$ are retrieved and validated against $C_{grd}$ from SYNOP stations over Korea (22 stations) and China (724 stations), during daytime only, for the first seven days of every month from July 2016 to June 2017. As a result, the mean error of $C_{sat,\;adj}$ (0.61) is less than that of $C_{sat,\;prev}$ (1.01). The probability of detection for the 'Cloudy' scenario of $C_{sat,\;adj}$ (73%) is higher than that of $C_{sat,\;prev}$ (60%). The percentage of correct classifications (the accuracy) of $C_{sat,\;adj}$ is 61%, while that of $C_{sat,\;prev}$ is 55%, over all seasons. For the December-January-February period, when cloudy pixels are readily overestimated, the percentage of correct classifications of $C_{sat,\;adj}$ is 60%, while that of $C_{sat,\;prev}$ is 56%. Therefore, we conclude that the present algorithm can effectively bring the satellite-derived cloudiness close to the ground-viewed cloudiness.
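
The plane-to-hemisphere conversion at the core of the algorithm can be sketched as follows: each cloudy pixel is assigned a sky cell from the elevation and azimuth of its cloud base as seen by a ground observer, and covered cells are weighted by their solid angle. The grid sizes, the uniform cloud-base height, and the test mask are illustrative assumptions.

```python
# Sketch: convert a plan-view cloud mask into the cloud fraction on the
# celestial hemisphere of an observer at the domain center. Each cloudy
# pixel marks the (azimuth, elevation) sky cell of its cloud base, and
# cells are weighted by solid angle. All sizes/heights are illustrative.
import numpy as np

def hemispheric_cloudiness(cloud_mask, pixel_km=2.0, base_km=2.0,
                           n_az=72, n_el=18):
    ny, nx = cloud_mask.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    dy = (yy - ny / 2) * pixel_km          # observer at the domain center
    dx = (xx - nx / 2) * pixel_km
    dist = np.hypot(dy, dx) + 1e-6
    elev = np.arctan2(base_km, dist)       # elevation angle of cloud base
    azim = np.arctan2(dy, dx) % (2 * np.pi)

    # Mark the sky cells covered by at least one cloudy pixel.
    sky = np.zeros((n_az, n_el), dtype=bool)
    ia = (azim[cloud_mask] / (2 * np.pi) * n_az).astype(int) % n_az
    ie = np.minimum((elev[cloud_mask] / (np.pi / 2) * n_el).astype(int),
                    n_el - 1)
    sky[ia, ie] = True

    # Fraction of the hemisphere: weight each elevation band by solid angle.
    el_edges = np.linspace(0.0, np.pi / 2, n_el + 1)
    band_weight = (np.sin(el_edges[1:]) - np.sin(el_edges[:-1])) / n_az
    return float((sky * band_weight).sum())

mask = np.zeros((100, 100), dtype=bool)
mask[10:30, 10:30] = True  # one cloud patch away from the observer
print("plane fraction     :", mask.mean())
print("hemisphere fraction:", hemispheric_cloudiness(mask))
```

The two printed fractions differ because the same cloud field covers a different share of the observer's sky depending on its distance and base height, which is exactly the kind of viewpoint discrepancy between $C_{sat,\;prev}$ and $C_{grd}$ that the algorithm corrects.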