• Title/Summary/Keyword: information retrieval.

Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.317-324
    • /
    • 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers look up the Talairach atlas to report the localization of activations detected by the SPM program, there is a significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective, and inaccurate. The purpose of this study was to develop a program to provide objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of the ICBM. When an x-y-z position was given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures were tabulated. The program was coded in the IDL and Java languages for easy transplantation to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying this program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of this program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
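
As a concrete illustration of the lookup the abstract describes, the sketch below tabulates, for a given MNI position, every structure with non-zero probability from a set of SPAM volumes. It assumes the maps are available as NumPy arrays sharing one voxel-to-MNI affine; all names and the array layout are hypothetical, not the paper's IDL/Java implementation.

```python
# Minimal sketch of SPAM-based anatomical labeling (hypothetical data layout).
import numpy as np

def label_position(xyz_mm, spam_volumes, affine):
    """Return (structure, probability) pairs with non-zero probability at an MNI position.

    xyz_mm       : (x, y, z) position in MNI space, in millimetres
    spam_volumes : dict mapping structure name -> 3D probability array
    affine       : 4x4 voxel-to-MNI affine shared by all volumes
    """
    # Convert the millimetre coordinate to voxel indices via the inverse affine.
    ijk = np.linalg.inv(affine) @ np.array([*xyz_mm, 1.0])
    i, j, k = np.round(ijk[:3]).astype(int)

    hits = []
    for name, vol in spam_volumes.items():
        p = float(vol[i, j, k])
        if p > 0.0:
            hits.append((name, p))
    # Tabulate from most to least probable, as in the paper's output.
    return sorted(hits, key=lambda t: -t[1])
```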

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.15
    • /
    • pp.225-266
    • /
    • 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem to be appropriate to the Korean language, because the syntax and semantics of Korean are different from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, namely the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and most of the notable subject indexing systems, in particular PRECIS, developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse. From that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationship of index terms. 4) Traditionally, a single-line entry format is used in which a subject heading or index entry is presented as a single sequence of words, consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS, however, employs a two-line entry format which contains three basic positions for the production of index entries. The 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each of the KOSIS entries is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order. Inverted headings are not produced in KOSIS. Consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'see' and 'see also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system, or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain the consistency of index entries and cross references by means of routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number are allocated to each new index string and each new index term, respectively. KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of various outputs are provided for institutions which can use computer facilities.
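
The two-line entry format in outline 4) follows the PRECIS pattern of rotating each term into the lead position, with its wider context as qualifier and its dependent terms as display. The sketch below illustrates that shunting for a plain context-dependent string; it does not model the 24 role operators or the Korean-specific syntax of KOSIS, and the example terms are invented.

```python
# Illustrative PRECIS-style "shunting" over a context-dependent string,
# widest context first. Each indexable term takes a turn in the lead.

def shunt_entries(string_terms):
    """Return (lead, qualifier, display) triples for every term in the string."""
    entries = []
    for i, lead in enumerate(string_terms):
        qualifier = ". ".join(reversed(string_terms[:i]))  # wider context, nearest first
        display = ". ".join(string_terms[i + 1:])          # terms dependent on the lead
        entries.append((lead, qualifier, display))
    return entries

for lead, qual, disp in shunt_entries(["Korea", "Libraries", "Catalogs", "Subject indexing"]):
    print(f"{lead}. {qual}".rstrip(". "))  # first line: lead + qualifier
    print(f"   {disp}")                    # second line: display
```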


Odysseus/Parallel-OOSQL: A Parallel Search Engine using the Odysseus DBMS Tightly-Coupled with IR Capability (오디세우스/Parallel-OOSQL: 오디세우스 정보검색용 밀결합 DBMS를 사용한 병렬 정보 검색 엔진)

  • Ryu, Jae-Joon;Whang, Kyu-Young;Lee, Jae-Gil;Kwon, Hyuk-Yoon;Kim, Yi-Reun;Heo, Jun-Suk;Lee, Ki-Hoon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.4
    • /
    • pp.412-429
    • /
    • 2008
  • As the amount of electronic documents increases rapidly with the growth of the Internet, a parallel search engine capable of handling a large number of documents is becoming ever more important. To implement a parallel search engine, we need to partition the inverted index and search through the partitioned index in parallel. There are two methods of partitioning the inverted index: 1) document-identifier based partitioning and 2) keyword-identifier based partitioning. However, each method alone has drawbacks. The former is convenient for inserting documents and has high throughput, but has poor performance for top-k query processing. The latter has good performance for top-k query processing, but is inconvenient for inserting documents and has low throughput. In this paper, we propose a hybrid partitioning method to compensate for the drawbacks of each method. We design and implement a parallel search engine that supports the hybrid partitioning method using the Odysseus DBMS tightly coupled with information retrieval capability. We first introduce the architecture of the parallel search engine, Odysseus/Parallel-OOSQL. We then show the effectiveness of the proposed system through systematic experiments. The experimental results show that the query processing time of the document-identifier based partitioning method is approximately inversely proportional to the number of blocks in the partition of the inverted index. The results also show that the keyword-identifier based partitioning method has good performance in top-k query processing. The proposed parallel search engine can be optimized for performance by customizing the method of partitioning the inverted index according to the application environment. Odysseus/Parallel-OOSQL is capable of indexing, storing, and querying 100 million web documents per node, or tens of billions of web documents for the entire system.
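
The sketch below contrasts the two partitioning rules and one possible hybrid. How Odysseus/Parallel-OOSQL actually combines them is not specified in the abstract; routing frequently queried keywords to keyword-based partitions is only an illustrative assumption.

```python
# Schematic node-assignment rules for postings (term, doc_id) in an inverted index.
import zlib

def doc_partition(doc_id, n_nodes):
    # Document-identifier based: each node indexes a disjoint set of documents,
    # so a new document touches one node (easy insertion, high throughput).
    return doc_id % n_nodes

def keyword_partition(term, n_nodes):
    # Keyword-identifier based: the whole posting list of a term lives on one
    # node, which favours top-k query processing.
    return zlib.crc32(term.encode("utf-8")) % n_nodes  # stable across processes

def hybrid_partition(term, doc_id, n_nodes, hot_terms):
    # Hypothetical hybrid: keep complete posting lists for frequently queried
    # terms, and distribute the remaining postings by document identifier.
    if term in hot_terms:
        return keyword_partition(term, n_nodes)
    return doc_partition(doc_id, n_nodes)

# e.g. a posting is stored on node hybrid_partition("korea", 12345, 4, {"korea"}).
```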

The Suitable Region and Site for 'Fuji' Apple Under the Projected Climate in South Korea (미래 시나리오 기후조건하에서의 사과 '후지' 품종 재배적지 탐색)

  • Kim, Soo-Ock;Chung, U-Ran;Kim, Seung-Heui;Choi, In-Myung;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.4
    • /
    • pp.162-173
    • /
    • 2009
  • Information on the expected geographical shift of zones suitable for growing crops under future climates is a starting point for adaptation planning in agriculture and is attracting much attention from policy makers as well as researchers. Few practical schemes have been developed, however, because of the difficulty of implementing the site-selection concept at an analytical level. In this study, we suggest site-selection criteria for quality Fuji apple production and integrate geospatial data and information available in public domains (e.g., digital elevation models, digital soil maps, digital climate maps, and predictive models for agroclimate and fruit quality) to implement this concept on a GIS platform. The primary criterion for selecting sites suitable for Fuji apple production includes land cover, topography, and soil texture. When the primary criterion is satisfied, climatic conditions such as the length of the frost-free season, the freezing risk during the overwintering period, and the late-frost risk in spring are tested as the secondary criterion. Finally, the third criterion checks fruit quality attributes such as color and shape. Land attributes related to the factors in each criterion were implemented in the ArcGIS environment as raster layers for spatial analysis, and retrieval procedures were automated by writing programs compatible with ArcGIS. This scheme was applied to the A1B-projected climates for South Korea in the future normal years (2011-2040, 2041-2070, and 2071-2100), as well as the current climate condition observed in 1971-2000, to select the sites suitable for quality Fuji apple production in each period. The results showed that this scheme can identify the geographical shift of suitable zones at landscape scales as well as the latitudinal shift of the northern limit for cultivation at national or regional scales.
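
A minimal sketch of the three-step criteria cascade as a raster overlay, assuming each land attribute is available as a co-registered NumPy grid (the paper works with ArcGIS layers); every threshold below is a placeholder, not a value from the paper.

```python
# Three-step site-selection cascade as boolean raster algebra.
import numpy as np

def suitable_sites(landcover, slope_deg, soil_ok,
                   frost_free_days, freeze_risk, late_frost_risk,
                   color_score, shape_score):
    # 1st criterion: land cover, topography, soil texture
    # (class 1 = orchardable land; 15 degrees = placeholder slope limit).
    primary = (landcover == 1) & (slope_deg < 15.0) & soil_ok
    # 2nd criterion, tested only where the first holds: climatic conditions.
    secondary = ((frost_free_days >= 185) &
                 (freeze_risk < 0.1) & (late_frost_risk < 0.1))
    # 3rd criterion: predicted fruit quality (color and shape).
    tertiary = (color_score > 0.5) & (shape_score > 0.5)
    # A grid cell is suitable only if it passes all three criteria in order.
    return primary & secondary & tertiary
```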

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.47-60
    • /
    • 2012
  • Video data comes in an unstructured and complex form. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features contained in video content are conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting the shot boundary defined by the physical boundary does not consider the semantic association of video data. Recently, studies that use clustering methods to structure semantically associated video shots into video scenes, defined by semantic boundaries, have actively progressed. Previous studies on video scene detection try to detect scenes by utilizing clustering algorithms based on a similarity measure between video shots that depends mainly on color features. However, the correct identification of a video shot or scene and the detection of gradual transitions such as dissolve, fade, and wipe are difficult, because the color features of video data contain noise and change abruptly due to the intervention of unexpected objects. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge, and Object color histogram (SDCEO), which clusters similar shots composing the same event based on visual features including the color histogram, the corner edge, and the object color histogram to detect video scenes. SDCEO is noteworthy in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual transitions as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises the Color Histogram Analysis step and the Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, recording the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shots by measuring the similarity of the color histogram between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shots by comparing the corner edge feature between the last frame of the previous shot and the first frame of the next shot. In the Key-frame Extraction step, SDCEO compares each frame with all frames, measures the similarity using the histogram Euclidean distance, and then selects as the key-frame the frame most similar to all frames contained in the same shot. The Video Scene Detector clusters associated shots composing the same event by utilizing hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram. After detecting video scenes, SDCEO organizes the final video scenes by repeated clustering until the similarity distance between shots falls below the threshold. In this paper, we construct a prototype of SDCEO and carry out experiments with manually constructed baseline data; the results show a satisfactory precision of 93.3% for shot boundary detection and 83.3% for video scene detection.
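
A minimal sketch of the Color Histogram Analysis and Key-frame Extraction steps as the abstract describes them: frames are joined into one shot while their histogram distance stays small, and the key-frame is the frame most similar to all others in its shot. Histograms are assumed precomputed and normalised; the boundary threshold is a placeholder.

```python
# Shot boundary detection by histogram distance, plus key-frame selection.
import numpy as np

def detect_shot_boundaries(histograms, threshold=0.3):
    """histograms: list of normalised colour histograms, one per frame."""
    boundaries = [0]
    for i in range(1, len(histograms)):
        dist = np.linalg.norm(histograms[i] - histograms[i - 1])  # Euclidean distance
        if dist > threshold:
            boundaries.append(i)  # abrupt change: a new shot starts here
    return boundaries

def key_frame(histograms):
    # Pick the frame with the smallest summed distance to all other frames.
    h = np.stack(histograms)
    dists = np.linalg.norm(h[:, None, :] - h[None, :, :], axis=2).sum(axis=1)
    return int(np.argmin(dists))
```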

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for the research and business sectors to extract knowledge from. This data can be in the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons which learn parameters such as weights as inputs pass through to the outputs. A CNN has a layer structure which is well suited for image classification, as it is comprised of convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained using online product images, which are taken under controlled situations and show either the apparel item alone or a professional model wearing it. Such images may not be effective for training a classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained with far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training step into two. First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. For the runway image dataset, we could not find any previously published public dataset, so we collected one from Google Image Search, attaining 2426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification based on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could save the time spent on training the classification model, taking 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image can be a runway image, a product image, or a street fashion image. To be specific, runway query images can be used in a mobile application service during fashion week to facilitate brand search, street-style query images can be processed during fashion editorial tasks to label the brand or style, and website query images can be processed by a multi-complex e-commerce service providing item information or recommending similar items.
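
A hedged sketch of the two-stage transfer learning described above, written with Keras and InceptionV3 (a successor of GoogLeNet) as a stand-in for the paper's GoogLeNet/TF-Slim checkpoints; the dataset pipeline and hyperparameters are placeholders.

```python
# Two-stage transfer learning: inherit ImageNet pre-training, then fine-tune
# a new classifier head for the 32 runway brands.
import tensorflow as tf

# Stage 1 (pre-training) is inherited via the ImageNet weights.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         pooling="avg")
base.trainable = False  # freeze the pretrained feature extractor first

# Stage 2: fine-tune a new softmax head over the 32 brand classes.
head = tf.keras.layers.Dense(32, activation="softmax")(base.output)
model = tf.keras.Model(base.input, head)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(runway_train_ds, validation_data=runway_val_ds, epochs=10)
```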

Retrieval of Pollen Optical Depth in the Local Atmosphere by Lidar Observations (라이다를 이용한 지역 대기중 꽃가루의 광학적 두께 산출)

  • Noh, Young-Min;Lee, Han-Lim;Mueller, Detlef;Lee, Kwon-Ho;Choi, Young-Jean;Kim, Kyu-Rang;Choi, Tae-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.1
    • /
    • pp.11-19
    • /
    • 2012
  • Airborne pollen, a biogenically created aerosol particle, influences the Earth's radiative balance, impairs visibility, and affects human health. The importance of pollen has motivated numerous experimental studies aimed at characterizing its dispersion and transport, as well as its health effects. There is, however, limited scientific information concerning the optical properties of the airborne pollen particles contributing to total ambient aerosols. In this study, for the first time, the optical characteristics of pollen, such as the aerosol backscattering coefficient, the aerosol extinction coefficient, and the depolarization ratio at 532 nm, and its effect on the atmospheric aerosol were studied by the lidar remote sensing technique. Dual-lidar observations were carried out at the Gwangju Institute of Science & Technology (GIST) located in Gwangju, Korea ($35.15^{\circ}N$, $126.53^{\circ}E$) for a spring pollen event from 5 to 7 May 2009. The pollen concentration was measured on the rooftop of Gwangju Bohoon hospital, a building located 1.0 km from the lidar site, using a Burkard trap sampler. During the intensive observation period, high pollen concentrations of 1360, 2696, and $1952\;m^{-3}$ were detected on 5, 6, and 7 May, respectively, together with an increased lidar return signal below 1.5 km altitude. The pollen optical depth retrieved from the depolarization ratio was 0.036, 0.021, and 0.019 on 5, 6, and 7 May, respectively. Pollen particles were mainly detected in the daytime, resulting in an increased aerosol optical depth and a decreased Angstrom exponent.
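
One standard way to turn depolarization profiles into a pollen optical depth, sketched below, is a two-component mixing rule that separates the pollen backscatter from the total, followed by conversion to extinction with an assumed lidar ratio and integration over height. This is the generic technique, not necessarily the paper's exact procedure, and the reference depolarization ratios and lidar ratio are placeholders.

```python
# Pollen optical depth from depolarization lidar profiles (illustrative values).
import numpy as np

def pollen_optical_depth(z_m, beta_total, delta, delta_pollen=0.35,
                         delta_background=0.03, lidar_ratio_sr=50.0):
    """z_m: height grid [m]; beta_total: backscatter profile [m^-1 sr^-1];
    delta: measured particle depolarization ratio profile."""
    # Two-component separation: fraction of backscatter attributable to the
    # strongly depolarizing (pollen) component.
    f = ((delta - delta_background) * (1 + delta_pollen)) / \
        ((delta_pollen - delta_background) * (1 + delta))
    beta_pollen = beta_total * np.clip(f, 0.0, 1.0)   # pollen backscatter share
    alpha_pollen = lidar_ratio_sr * beta_pollen       # extinction [m^-1]
    return np.trapz(alpha_pollen, z_m)                # column optical depth
```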

Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.242-255
    • /
    • 2006
  • In recent years, the popularization of mobile devices such as smart phones, PDAs, and MP3 players has rapidly increased the need for power management technology, which is one of the most essential factors in mobile devices. On the other hand, the hard disk offers large capacity and high speed despite its low price, and it can now be made small enough for mobile devices; however, it consumes too much power to be embedded in them. Motivated by this, in this paper we suggest and evaluate methods of minimizing power consumption while playing back multimedia data stored on disk in real time. The strict limitation on the power consumption of mobile devices has a big impact on the design of both hardware and software. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. This is why the disk drive must remain in the active state for the entire playback duration, which, from the power management point of view, may be a great burden. The legacy power management function of a mobile disk drive negatively affects the quality of multimedia playback because of excessive I/O requests arriving while the disk is in the standby state. Therefore, in this paper, we analyze the power consumption profile of the disk drive in detail, and we develop an algorithm which can play multimedia data effectively using less power. This algorithm calculates the number of data blocks to be read and the durations of the active and standby states, and from these performs optimal scheduling that ensures continuous playback of the data blocks stored on the mobile disk drive. We implement our algorithm in publicly available MPEG player software. This MPEG player software saves up to 60% of power consumption compared with a disk drive that remains in the active state at all times, and 38% compared with a disk drive controlled by the native power management method.
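
A simplified model of the burst-read idea the abstract describes: read just enough blocks while active to keep playback continuous, then drop the disk to standby until the buffer drains. All device parameters below are illustrative, not measurements from the paper.

```python
# One active/standby read cycle for continuous multimedia playback.

def schedule(buffer_bytes, playback_Bps, disk_Bps,
             spinup_J=2.0, active_W=2.5, standby_W=0.2):
    """Return (active_s, standby_s, energy_per_cycle_J) for one read cycle."""
    # Fill the buffer at the disk's rate while playback drains it concurrently.
    active_s = buffer_bytes / (disk_Bps - playback_Bps)
    # Then play back from the buffer with the disk in standby.
    standby_s = buffer_bytes / playback_Bps
    # Cycle energy: one spin-up plus power drawn in each state.
    energy_J = spinup_J + active_W * active_s + standby_W * standby_s
    return active_s, standby_s, energy_J

# e.g. an 8 MiB buffer, a 1.5 Mbit/s MPEG stream, and a 20 MB/s disk:
print(schedule(8 * 2**20, 1.5e6 / 8, 20e6))
```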

4-way Search Window for Improving The Memory Bandwidth of High-performance 2D PE Architecture in H.264 Motion Estimation (H.264 움직임추정에서 고속 2D PE 아키텍처의 메모리대역폭 개선을 위한 4-방향 검색윈도우)

  • Ko, Byung-Soo;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.6
    • /
    • pp.6-15
    • /
    • 2009
  • In this paper, a new 4-way search window is designed for high-performance 2D PE architectures in H.264 Motion Estimation (ME) to improve the memory bandwidth. While existing 2D PE architectures reuse the overlapped data of adjacent search windows scanned in a 1-way or 3-way fashion, the new window utilizes the overlapped data of adjacent search windows as well as adjacent multiple scanning (window) paths to enhance the reuse of retrieved search window data. Scanning adjacent windows and multiple paths bidirectionally along rows and columns, instead of the single raster or zigzag scanning of adjacent windows, yields the 4-way (up, down, left, right) search window. The proposed 4-way search window improves the reuse of overlapped window data, reducing the redundancy access factor to 3.1, whereas the 1/3-way search windows redundantly require $7.7{\sim}11$ times of data retrieval. Thus, the new 4-way search window scheme enhances the memory bandwidth by $70{\sim}58%$ compared with the 1/3-way search windows. The 2D PE architecture in H.264 ME for the 4-way search window consists of a $16{\times}16$ PE array, computing the absolute difference between the current and reference frames, and a $5{\times}16$ reuse array, storing the overlapped data of adjacent search windows and multiple scanning paths. The reference data can be loaded upward or downward into the new 2D PE depending on the scanning direction, and the reuse array is combined with the PE array rotating left as well as right to utilize the overlapped data of adjacent multiple scan paths. In experiments, the new implementation of the 4-way search window on MagnaChip 0.18um could handle HD ($1280{\times}720$) video with 1 reference frame, a $48{\times}48$ search area, and $16{\times}16$ macroblocks at 30 fps at 149.25 MHz.
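
For reference, the core operation the PE array parallelises is the sum of absolute differences (SAD) between the current macroblock and each candidate block in the search area; the software model below computes it for one macroblock. The 4-way scanning order and the reuse array are a hardware dataflow and are only noted in comments.

```python
# Software model of full-search block matching with SAD.
import numpy as np

def full_search(cur_mb, ref_area):
    """cur_mb: 16x16 current macroblock; ref_area: search area containing it,
    e.g. 48x48 as in the paper's experiments."""
    H, W = ref_area.shape
    best = (None, np.inf)
    for dy in range(H - 16 + 1):
        for dx in range(W - 16 + 1):
            cand = ref_area[dy:dy + 16, dx:dx + 16]
            sad = np.abs(cur_mb.astype(int) - cand.astype(int)).sum()
            if sad < best[1]:
                best = ((dy, dx), sad)  # best motion vector so far
    # In hardware, adjacent candidates overlap in 15 of 16 rows/columns; the
    # 4-way window order maximises how much of that overlap can be reused.
    return best
```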

Development of a Retrieval Algorithm for Adjustment of Satellite-viewed Cloudiness (위성관측운량 보정을 위한 알고리즘의 개발)

  • Son, Jiyoung;Lee, Yoon-Kyoung;Choi, Yong-Sang;Ok, Jung;Kim, Hye-Sil
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.3
    • /
    • pp.415-431
    • /
    • 2019
  • The satellite-viewed cloudiness, the ratio of cloudy pixels to total pixels ($C_{sat,\;prev}$), inevitably differs from the "ground-viewed" cloudiness ($C_{grd}$) due to the different viewpoints. Here we develop an algorithm to retrieve a satellite-viewed cloudiness adjusted to $C_{grd}$ ($C_{sat,\;adj}$). The key process of the algorithm is to convert the cloudiness projected on the plane surface into the cloudiness on the celestial hemisphere as seen from the observer. For this conversion, supplementary satellite retrievals such as cloud detection and cloud top pressure are used, as they provide the locations of cloudy pixels and cloud base height information, respectively. The algorithm is tested on Himawari-8 level 1B data. The $C_{sat,\;adj}$ and $C_{sat,\;prev}$ are retrieved and validated against $C_{grd}$ from SYNOP stations over Korea (22 stations) and China (724 stations) during daytime only, for the first seven days of every month from July 2016 to June 2017. As a result, the mean error of $C_{sat,\;adj}$ (0.61) is less than that of $C_{sat,\;prev}$ (1.01). The percent of detection for the 'Cloudy' scenario of $C_{sat,\;adj}$ (73%) is higher than that of $C_{sat,\;prev}$ (60%). The percent of correction (the accuracy) of $C_{sat,\;adj}$ is 61%, while that of $C_{sat,\;prev}$ is 55% for all seasons. For the December-January-February period, when cloudy pixels are readily overestimated, the percent of correction of $C_{sat,\;adj}$ is 60%, while that of $C_{sat,\;prev}$ is 56%. Therefore, we conclude that the present algorithm can effectively bring the satellite-viewed cloudiness close to the ground-viewed cloudiness.
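
A geometric sketch of the key conversion, under a simplifying single-layer, flat-plane assumption: cloudy pixels at cloud base height are re-projected onto the observer's celestial hemisphere and weighted by the solid angle they subtend. This illustrates the idea only, not the operational algorithm.

```python
# Plane-to-hemisphere cloudiness conversion for a single cloud layer.
import numpy as np

def hemispheric_cloudiness(cloud_mask, pixel_km, base_height_km):
    """cloud_mask: 2D boolean cloud grid centred on the ground observer."""
    ny, nx = cloud_mask.shape
    y, x = np.mgrid[:ny, :nx]
    dx = (x - nx // 2) * pixel_km
    dy = (y - ny // 2) * pixel_km
    r = np.hypot(dx, dy)                       # horizontal distance [km]
    # Solid angle of a small horizontal patch seen from below:
    # dOmega = A * cos(theta) / s^2, with s the slant range to the patch.
    s2 = r**2 + base_height_km**2
    cos_theta = base_height_km / np.sqrt(s2)
    d_omega = (pixel_km**2) * cos_theta / s2
    # Cloudiness = cloudy solid angle / total solid angle covered by the grid.
    return float(d_omega[cloud_mask].sum() / d_omega.sum())
```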