• Title/Summary/Keyword: Paper Retrieval

Search Results: 2,125

4-way Search Window for Improving The Memory Bandwidth of High-performance 2D PE Architecture in H.264 Motion Estimation (H.264 움직임추정에서 고속 2D PE 아키텍처의 메모리대역폭 개선을 위한 4-방향 검색윈도우)

  • Ko, Byung-Soo;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.6
    • /
    • pp.6-15
    • /
    • 2009
  • In this paper, a new 4-way search window is designed for the high-performance 2D PE architecture in H.264 Motion Estimation (ME) to improve the memory bandwidth. While existing 2D PE architectures reuse only the overlapped data of adjacent search windows scanned in 1 or 3 ways, the new window utilizes the overlapped data of adjacent search windows as well as of adjacent multiple scanning (window) paths to enhance the reuse of retrieved search window data. To scan adjacent windows and multiple paths, instead of the single raster and zigzag scanning of adjacent windows, bidirectional row and column window scanning yields the 4-way (up, down, left, right) search window. The proposed 4-way search window improves the reuse of overlapped window data, reducing the redundancy access factor to 3.1, whereas the 1/3-way search windows redundantly require 7.7~11 times the data retrieval. Thus, the new 4-way search window scheme enhances the memory bandwidth by 70~58% compared with the 1/3-way search windows. The 2D PE architecture in H.264 ME for the 4-way search window consists of a 16×16 PE array, computing the absolute differences between current and reference frames, and a 5×16 reuse array, storing the overlapped data of adjacent search windows and multiple scanning paths. The reference data can be loaded upward or downward into the new 2D PE depending on the scanning direction, and the reuse array is combined with the PE array rotating left as well as right to utilize the overlapped data of adjacent multiple scan paths. In experiments, the new implementation of the 4-way search window in MagnaChip 0.18 µm technology could process HD (1280×720) video with 1 reference frame, a 48×48 search area, and 16×16 macroblocks at 30 fps at 149.25 MHz.
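The bandwidth saving from window-overlap reuse can be illustrated with a rough model (my own sketch, not the paper's analysis): without reuse, every macroblock reloads its whole W×W search window; with column reuse, only the N new columns are fetched after the first window in a scan row.

```python
# W: search-window width, N: macroblock size (e.g. W=48, N=16 as in the paper).

def pixels_loaded(num_blocks: int, W: int, N: int, reuse: bool) -> int:
    """Reference pixels fetched for one row of macroblocks."""
    if not reuse:
        return num_blocks * W * W              # full window per macroblock
    return W * W + (num_blocks - 1) * W * N    # first window + new columns only

def bandwidth_ratio(num_blocks: int, W: int, N: int) -> float:
    """How many times more data the no-reuse scheme fetches."""
    return pixels_loaded(num_blocks, W, N, False) / pixels_loaded(num_blocks, W, N, True)
```

For a long row of 16×16 macroblocks with a 48×48 window, the ratio approaches W/N = 3, the same order as the reported reduction of the redundancy access factor; the actual 4-way scheme additionally reuses data across adjacent scan paths, which this sketch does not model.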

Some Instances of Manchurian Naturalization and Settlement in Choson Dynasty (향화인의 조선 정착 사례 연구 - 여진 향화인을 중심으로 -)

  • Won, Chang-Ae
    • (The)Study of the Eastern Classic
    • /
    • no.37
    • /
    • pp.33-61
    • /
    • 2009
  • In the late Koryo period, up to the 14th century, there were at least two groups of Manchurians who were granted citizenship. One group had long lived as original inhabitants in the coastal area of the north-eastern Korean peninsula and numbered over one thousand households. The other came down from inland, east of the Yoha River, to the Tuman River area to settle, and comprised at least around one hundred and sixty households, including such tribes as the Al-tha-ry, Ol-lyang-hap, and Ol-jok-hap. From the early days of the Choson dynasty they were treated courteously, with governmental support in economic, political, and social matters. They were given, for instance, a house, land, household furniture, and clothes. They were allowed to marry native Koreans in order to settle down, and they were taught how to cultivate their lands. It was also possible for them to be given an official position or to take the National Civil Official Examination. The fact that they could take such an examination, in particular, means they were treated fairly and equally, because it gave them, as much as common people, the chance to improve their social position through the formal system. Two representative families are scrutinized in this paper: the Chong-hae Lee family and the Chon-ju Ju family. Both settled down successfully, though from different backgrounds. The former descended from a headman, Lee Jee-ran, who led a tribe of over five hundred households. He was given three titles of meritorious retainer: at the founding of the Choson dynasty, at the recall of the armies, and as an enshrined retainer. His son, Lee Wha-yong, was also made a vassal of merit and kept a close tie with the king's family through marriage. On the foundation laid by their ancestors, their grandsons' lines, the Lee Hyo-yang and Lee Hyo-gang families, each took solid root as members of the aristocratic Yang-ban class.
The former became a family of high officers, generation after generation, while the latter became a family of civil officials through the Civil Official Examinations. They lived mainly around Seoul and Kyong-gi Province, while some remained in their original home, Ham-kyong Province. Chu-man, the first ancestor of the Ju family, was made a meritorious retainer at the founding of the dynasty, and Chu-in was also given high office by the government. They kept living in their original place, Ham-heung in Ham-kyong Province, and became an outstanding local family there. They began to pass the Civil Official Examinations: from the 17th century on, 17 passed the Civil Official Examinations and 40 passed the lower civil examinations. The positions they attained in government were usually remonstrance offices, positions that were prohibited to north-western people at that time. The Choson dynasty was widely open to Manchurians through its systems of envoy, convoy, and naturalization. The intent was, through friendly diplomatic relations with them, to build a protective buffer against any possible invasion from outside. This is one reason why they were supported so fully and in such various ways.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering system of a smart speaker. However, building a useful knowledge base is a time-consuming task that still requires considerable expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created through the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into the RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, it is possible to utilize structured knowledge by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the experts' effort in constructing instances according to the ontology schema.
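The BIO-tagged training data generation described in the abstract can be sketched as follows (an assumed minimal illustration, not the authors' code): given a sentence and a known infobox attribute value, the value span is tagged B/I and everything else O.

```python
def bio_tags(tokens, value_tokens):
    """Return one BIO tag per token, marking the first match of the value span."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B"                       # beginning of the value span
            tags[i + 1:i + n] = ["I"] * (n - 1)  # inside the value span
            break
    return tags

sentence = "Seoul is the capital of South Korea".split()
print(bio_tags(sentence, ["South", "Korea"]))
# ['O', 'O', 'O', 'O', 'O', 'B', 'I']
```

A sequence labeler such as CRF or Bi-LSTM-CRF is then trained on sentences tagged this way, one model behavior per ontology attribute.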

Gridded Expansion of Forest Flux Observations and Mapping of Daily CO2 Absorption by the Forests in Korea Using Numerical Weather Prediction Data and Satellite Images (국지예보모델과 위성영상을 이용한 극상림 플럭스 관측의 공간연속면 확장 및 우리나라 산림의 일일 탄소흡수능 격자자료 산출)

  • Kim, Gunah;Cho, Jaeil;Kang, Minseok;Lee, Bora;Kim, Eun-Sook;Choi, Chuluong;Lee, Hanlim;Lee, Taeyun;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_1
    • /
    • pp.1449-1463
    • /
    • 2020
  • As recent global warming and climate change become more serious, the importance of CO2 absorption by forests is increasing as a means of coping with greenhouse gas issues. Under the UN Framework Convention on Climate Change, national CO2 absorption must be calculated at the local level in a more scientific and rigorous manner. This paper presents the gridded expansion of forest flux observations and the mapping of daily CO2 absorption by the forests in Korea using numerical weather prediction data and satellite images. To account for the sensitive daily changes of plant photosynthesis, we built a machine learning model to retrieve the daily RACA (reference amount of CO2 absorption) by referring to the climax forest in Gwangneung, and adopted the NIFoS (National Institute of Forest Science) lookup table of CO2 absorption by forest type and age to produce daily AACA (actual amount of CO2 absorption) raster data reflecting the spatial variation of the forests in Korea. In the experiment covering the 1,095 days between Jan 1, 2013 and Dec 31, 2015, our RACA retrieval model showed high accuracy, with a correlation coefficient of 0.948. To achieve Tier 3 daily statistics for AACA, long-term and detailed forest surveys should be combined with the model in the future.
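The AACA mapping step described above can be sketched schematically: the gridded daily RACA from the model is scaled per cell by a forest-type/age coefficient. The coefficients and class labels below are invented placeholders, not NIFoS lookup values.

```python
import numpy as np

# Hypothetical coefficients relative to the reference (climax) forest.
COEF = {("pine", "31-40yr"): 0.85,
        ("oak",  "21-30yr"): 0.72}

def aaca_grid(raca: np.ndarray, ftype: np.ndarray, age: np.ndarray) -> np.ndarray:
    """AACA = RACA x coefficient(forest type, age class), per grid cell."""
    lookup = np.vectorize(lambda t, a: COEF.get((t, a), 0.0))
    return raca * lookup(ftype, age)
```

Cells whose type/age pair is missing from the table fall back to 0.0 here; a real implementation would instead flag them for survey follow-up.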

Gap-Filling of Sentinel-2 NDVI Using Sentinel-1 Radar Vegetation Indices and AutoML (Sentinel-1 레이더 식생지수와 AutoML을 이용한 Sentinel-2 NDVI 결측화소 복원)

  • Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Yungyo Im;Youngmin Seo;Myoungsoo Won;Junghwa Chun;Kyungmin Kim;Keunchang Jang;Joongbin Lim;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1341-1352
    • /
    • 2023
  • The normalized difference vegetation index (NDVI) derived from satellite images is a crucial tool for monitoring forests and agriculture over broad areas because periodic acquisition of the data is ensured. However, optical sensor-based vegetation indices (VI) are not available in areas covered by clouds. This paper presents a synthetic aperture radar (SAR) based approach to retrieving the optical sensor-based NDVI using machine learning. SAR systems can observe the land surface day and night in all weather conditions. Radar vegetation indices (RVI) from the Sentinel-1 vertical-vertical (VV) and vertical-horizontal (VH) polarizations, surface elevation, and air temperature are used as the input features of an automated machine learning (AutoML) model for gap-filling of the Sentinel-2 NDVI. The mean bias error (MBE) was 7.214E-05, and the correlation coefficient (CC) was 0.878, demonstrating the feasibility of the proposed method. This approach can be applied to constructing gap-free nationwide NDVI from Sentinel-1 and Sentinel-2 images for environmental monitoring and resource management.
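A common dual-polarization formulation of the radar vegetation index for Sentinel-1 is RVI = 4·VH / (VV + VH) in linear power units; whether the paper uses this exact formulation is an assumption on my part, and the sketch below only illustrates the feature computation.

```python
import numpy as np

def rvi_from_db(vv_db, vh_db):
    """Radar vegetation index from VV/VH backscatter given in dB."""
    vv = 10.0 ** (np.asarray(vv_db, dtype=float) / 10.0)  # dB -> linear power
    vh = 10.0 ** (np.asarray(vh_db, dtype=float) / 10.0)
    return 4.0 * vh / (vv + vh)
```

Per cloudy pixel, the RVI together with surface elevation and air temperature would then form the feature vector, with coincident cloud-free Sentinel-2 NDVI as the regression target for the AutoML model.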