• Title/Summary/Keyword: increasing set

Search Result 1,876

A Study on the Application Effect of Central-Grid PV System at a Streetlamp using RETScreen - A Case Study of Gwangjin-gu - (RETScreen을 이용한 가로등의 계통연계형 태양광시스템 적용 효과 분석 - 서울시 광진구를 중심으로 -)

  • Kang, Seongmin;Choi, Bong-Seok;Kim, Seungjin;Mun, Hyo-dong;Lee, Jeongwoo;Park, Nyun-Bae;Jeon, Eui-Chan
    • Journal of Climate Change Research / v.5 no.1 / pp.1-12 / 2014
  • With continued economic growth, Korea has seen an increase in the nighttime activities of its citizens as hours of activity extend into the night, and energy consumption related to these nighttime activities is trending upward. This study used the RETScreen model to analyze how grid-connected solar systems could efficiently offset the power consumption of streetlights and generate profit. Through energy and cost analyses, the benefit and viability of applying grid-connected solar streetlight systems were evaluated. The analysis showed a total weekly power generation of 114 kWh from a grid-connected solar streetlight system, a net present value of 155,362 KRW, and a payback period of about 5.2 years, meaning that profit could be generated after about 6 years. If the grid-connected solar power generation system proposed in this study is applied, a profit of 401,935 KRW could be generated over the 20-year useful life set for the solar system. In addition, a sensitivity analysis was performed that accounted for price fluctuations in the system marginal price (SMP) and maintenance costs; the payback period increased by 1~2 years, with no significant differences overall. Because the most important factor affecting the economic analysis is the price of renewable energy supply certificates, stable sales and acquisition of these certificates are very important. The Seoul-type Feed-in Tariff (FIT), connected to other institutions, will enable steady sales by guaranteeing purchase of the certificates for 12 years. Therefore, if these issues are properly reflected, central-grid PV streetlight projects will be able to perform well even under unfavorable conditions for solar PV installation.
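The NPV and payback figures above follow standard discounted cash flow arithmetic; the sketch below shows how such indicators can be computed. The cash flows, discount rate, and installation cost are illustrative assumptions, not values from the RETScreen analysis.

```python
# Minimal sketch of the NPV and simple-payback arithmetic used in PV economic analysis.
# All numbers below are illustrative assumptions, not the study's RETScreen inputs.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] occurring at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_payback(initial_cost, annual_saving):
    """Years needed for cumulative (undiscounted) savings to cover the cost."""
    return initial_cost / annual_saving

if __name__ == "__main__":
    initial_cost = 1_000_000          # KRW, assumed installation cost
    annual_saving = 190_000           # KRW/year, assumed electricity + certificate revenue
    lifetime = 20                     # years, the useful life assumed in the abstract
    cash_flows = [-initial_cost] + [annual_saving] * lifetime

    print(f"NPV over {lifetime} years: {npv(0.045, cash_flows):,.0f} KRW")
    print(f"Simple payback: {simple_payback(initial_cost, annual_saving):.1f} years")
```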

Development Strategy for New Climate Change Scenarios based on RCP (온실가스 시나리오 RCP에 대한 새로운 기후변화 시나리오 개발 전략)

  • Baek, Hee-Jeong;Cho, ChunHo;Kwon, Won-Tae;Kim, Seong-Kyoun;Cho, Joo-Young;Kim, Yeongsin
    • Journal of Climate Change Research / v.2 no.1 / pp.55-68 / 2011
  • The Intergovernmental Panel on Climate Change (IPCC) has identified the causes of climate change and come up with measures to address it at the global level. A key component of this work involves developing and assessing future climate change scenarios. The IPCC Expert Meeting in September 2007, in which 130 researchers and users took part, identified a new greenhouse gas concentration scenario, the "Representative Concentration Pathway (RCP)", and established the framework and development schedule for the Climate Modeling (CM), Integrated Assessment Modeling (IAM), and Impact, Adaptation and Vulnerability (IAV) communities for the Fifth IPCC Assessment Report. At the IPCC Expert Meeting in September 2008, the CM community agreed on a new set of coordinated climate model experiments, the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which consists of more than 30 standardized experiment protocols for short-term and long-term time scales, in order to enhance understanding of climate change for the IPCC AR5, to develop climate change scenarios, and to address major issues raised in the IPCC AR4. Since early 2009, fourteen countries including Korea have been carrying out CMIP5-related projects. With increasing interest in climate change, the COordinated Regional Downscaling EXperiment (CORDEX) was launched in 2009 to generate regional- and local-level information on climate change. The National Institute of Meteorological Research (NIMR) under the Korea Meteorological Administration (KMA) contributed to the IPCC AR4 by developing climate change scenarios based on the IPCC SRES using ECHO-G, and has embarked on crafting national climate change scenarios as well as RCP-based global ones by engaging in international projects such as CMIP5 and CORDEX. NIMR/KMA will contribute to drawing up the IPCC AR5 and will develop national climate change scenarios that reflect geographical factors, local climate characteristics, and user needs, and provide them to the national IAV and IAM communities to assess future regional climate impacts and take action.

A Study for Activation Measure of Climate Change Mitigation Movement - A Case Study of Green Start Movement - (기후변화 완화 활동 활성화 방안에 관한 연구 - 그린스타트 운동을 중심으로 -)

  • Cho, Sung Heum;Lee, Sang Hoon;Moon, Tae Hoon;Choi, Bong Seok;Park, Na Hyun;Jeon, Eui Chan
    • Journal of Climate Change Research / v.5 no.2 / pp.95-107 / 2014
  • The 'Green Start Movement' is a practical green-living movement intended to efficiently reduce greenhouse gases originating from non-industrial sectors such as households, commerce, and transportation, under the vision of 'realizing a low-carbon society through green growth (Low Carbon, Green Korea)'. When the new government took office, following the Lee Myeongbak administration that had presented 'Low Carbon, Green Growth' as a national vision, it became necessary to set the direction of this practical green-living movement so that it could respond to climate change persistently and stably, and to evaluate the performance of the Green Start Movement over the past five years. A questionnaire survey was administered to 265 persons, including public servants, members of environmental and non-environmental NGOs, participants in the Green Start Movement, and professionals. The results indicate that awareness of the Green Start Movement is increasing and that the movement has had a positive impact on individual and group behavior in terms of green living. The results also show, however, that environmental NGOs do not cooperate sufficiently to create a 'green living' effect on a national scale. Action needs to be taken at the community level in order to generate a culture of environmental responsibility. The national administration office of the Green Start Movement Network should play the leading role between the government and environmental NGOs. The Green Start National Network should have greater autonomy, and the governance of the network needs to be restructured in order to work effectively. The Green Start Movement should also identify specific local characteristics so as to support activities that reduce greenhouse gas emissions, and best practices can be shared to reduce emissions by a substantial amount.

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • With the rapidly increasing demand for text data analysis, research on and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on applications in the second step. However, with the recognition that the text-structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can process. Mapping arbitrary objects into a specific vector space while maintaining their algebraic properties is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, traditional document embedding methods such as doc2Vec generate a vector for each document using the whole set of words contained in the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through other analysis methods, but since this is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is influenced by miscellaneous words as well as core words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document.
Next, the keyword vectors of each document are clustered to identify the multiple subjects included in the document. Finally, one vector is generated from the keyword vectors constituting each cluster, yielding multiple vectors per document. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector, whereas the proposed multi-vector method vectorizes complex documents more accurately by eliminating this interference.
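As a rough illustration of the five-step pipeline described above, the following sketch builds multiple subject vectors for a toy document. The choice of gensim Word2Vec and scikit-learn KMeans, the toy text, and all parameter values are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch: parsing, word embedding, keyword vector extraction,
# keyword clustering, and multi-vector generation for one toy document.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# (1) Parsing: tokenized body text plus predefined keywords (assumed example).
body_tokens = [["deep", "learning", "improves", "text", "classification"],
               ["solar", "power", "reduces", "carbon", "emissions"]]
keywords = ["learning", "classification", "solar", "emissions"]

# (2) Word embedding over the tokenized body text.
w2v = Word2Vec(sentences=body_tokens, vector_size=50, min_count=1, epochs=50, seed=1)

# (3) Keyword vector extraction: keep only the vectors of the document's keywords.
keyword_vectors = np.array([w2v.wv[k] for k in keywords if k in w2v.wv])

# (4) Keyword clustering: one cluster per subject (two subjects assumed here).
n_subjects = 2
labels = KMeans(n_clusters=n_subjects, n_init=10, random_state=1).fit_predict(keyword_vectors)

# (5) Multi-vector generation: average the keyword vectors in each cluster.
doc_vectors = [keyword_vectors[labels == c].mean(axis=0) for c in range(n_subjects)]
print(f"{len(doc_vectors)} subject vectors of dimension {doc_vectors[0].shape[0]}")
```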

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Technologies in artificial intelligence have been developing rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and research on various algorithms the field has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various AI applications, such as the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a lot of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we present a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classification of documents into ontology classes, classification of the sentences suitable for triple extraction, and value selection and transformation into the RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify an input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
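As a rough illustration of the final step above (value selection and transformation of a BIO-tagged sentence into an RDF triple), here is a minimal sketch. The example sentence, the subject resource name, the property name, and the use of rdflib are illustrative assumptions, not the paper's training data or implementation.

```python
# Minimal sketch: converting one BIO-tagged sentence into an RDF triple.
from rdflib import Graph, Literal, Namespace

DBO = Namespace("http://dbpedia.org/ontology/")
DBR = Namespace("http://dbpedia.org/resource/")

# Tokens of one sentence about the subject entity; BIO tags mark the value span
# of the assumed property dbo:birthPlace.
tokens = ["Hong", "was", "born", "in", "Seoul", ",", "South", "Korea", "."]
bio_tags = ["O", "O", "O", "O", "B-birthPlace", "O", "O", "O", "O"]

def extract_value(tokens, tags):
    """Collect the tokens inside the first B-/I- span and the property name."""
    span, prop = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            prop, span = tag[2:], [tok]
        elif tag.startswith("I-") and prop:
            span.append(tok)
    return prop, " ".join(span) if span else None

prop, value = extract_value(tokens, bio_tags)

g = Graph()
g.add((DBR["Hong_Gildong"], DBO[prop], Literal(value)))  # (subject, predicate, object)
print(g.serialize(format="turtle"))
```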

Comparison Evaluation of Image Quality with Different Thickness of Aluminum added Filter using GATE Simulation in Digital Radiography (GATE 시뮬레이션을 사용한 알루미늄 부가필터 두께에 따른 Digital Radiography의 영상 화질 비교 평가)

  • Oh, Minju;Hong, Joo-Wan;Lee, Youngjin
    • Journal of the Korean Society of Radiology / v.13 no.1 / pp.81-86 / 2019
  • In X-ray imaging, the role of filtration is to reduce patient exposure while retaining the photons that are useful in forming the image, and at the same time to enhance image contrast. During the interaction between photons and the object, low-energy X-rays are absorbed within the first few centimeters of the patient's tissue, and it is the high-energy X-rays that form the image. The added filter therefore absorbs low-energy X-rays in order to lower patient exposure and improve image quality. The purpose of this study is to compare the effect of added-filter thickness on image quality in simulated images and actual radiographic images. For this purpose, we used the Geant4 Application for Tomographic Emission (GATE) as a Monte Carlo simulation tool. We modeled the actual size, shape, and material of a Polymethylmethacrylate (PMMA) phantom in GATE and varied the added-filter parameter. We also imaged the PMMA phantom with the same added-filter parameters using digital radiography (DR). We then performed contrast-to-noise ratio (CNR) evaluations on both the simulated and the actual DR images using ImageJ. Finally, we observed the effect of added-filter thickness on image quality and compared the trends of the CNR values for the two image sets. The results showed a decrease in CNR in both the DR and the simulated images, which is ultimately caused by a decrease in image contrast. In theory, contrast decreases as kVp increases. Given that condition, this study found that the added filter not only decreases the total dose by absorbing low-energy X-rays but also increases the average energy of the X-ray beam.
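The CNR evaluation described above can be illustrated with a short sketch. The common definition CNR = |mean_ROI - mean_BG| / std_BG, the synthetic image, and the ROI positions are assumptions for illustration, since the paper's exact ROI layout and formula are not given here.

```python
# Minimal sketch of a contrast-to-noise ratio (CNR) evaluation on two regions of interest.
import numpy as np

def cnr(image, roi_slice, bg_slice):
    """CNR between an object ROI and a background ROI, given as numpy slices."""
    roi, bg = image[roi_slice], image[bg_slice]
    return abs(roi.mean() - bg.mean()) / bg.std()

# Synthetic 2D image standing in for a DR or GATE-simulated phantom image.
rng = np.random.default_rng(0)
image = rng.normal(loc=100.0, scale=5.0, size=(256, 256))
image[100:150, 100:150] += 20.0  # brighter square mimicking the phantom insert

print(f"CNR = {cnr(image, np.s_[100:150, 100:150], np.s_[0:50, 0:50]):.2f}")
```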

Influence of heading date difference on gene flow from GM to non-GM rices (GM벼에서 non-GM벼로 유전자 이동에 대한 개화기 차이의 영향 분석)

  • Oh, Sung-Dug;Chang, Ancheol;Kim, Boeun;Sohn, Soo-In;Yun, Doh-Won
    • Journal of the Korean Society of International Agriculture / v.30 no.4 / pp.347-356 / 2018
  • Genetically modified (GM) crops have been cultivated increasingly around the world, and concerns about their potential risks have also been increasing. Even though GM crops have not been cultivated commercially in Korea, it is necessary to develop safety assessment technology for them. In this study, we investigated the influence of heading date differences on gene flow from GM to non-GM rice. In the experimental plot design, the PAC GM rice was placed in the center as the pollen donor, and non-GM rice cultivars were placed in eight directions as pollen receivers. The five pollen-receiver cultivars were Unkawng, Daebo, Saegyejinmi, Nakdong-byeo, and Ilmi, which have different flowering times. A total of 266,436, 300,237, 305,223, 273,373, and 290,759 seeds were collected from Unkawng, Daebo, Saegyejinmi, Nakdong, and Ilmi, respectively, planted around the PAC GM rice. GM × non-GM hybrids were detected by repeated spraying of herbicide and a PAT immunostrip assay, and finally confirmed by PCR analysis using a PAC gene-specific primer. Hybrids were found in Nakdong-byeo, which has the same heading date as the PAC GM rice; the hybridization rate in the Nakdong-byeo plot was 0.0007%. All GM × non-GM hybrids were located within 2 m of the PAC GM rice zone. Physiological elements, including the rice heading date, were found to be important factors determining the outcrossing rate between GM and non-GM rice. Many factors, such as the heading dates of the rice cultivars in the field, should be taken into consideration when setting up safety management guidelines for preventing GM rice gene flow.

Calculation of future rainfall scenarios to consider the impact of climate change in Seoul City's hydraulic facility design standards (서울시 수리시설 설계기준의 기후변화 영향 고려를 위한 미래강우시나리오 산정)

  • Yoon, Sun-Kwon;Lee, Taesam;Seong, Kiyoung;Ahn, Yujin
    • Journal of Korea Water Resources Association / v.54 no.6 / pp.419-431 / 2021
  • In Seoul, it has been confirmed that with a changing climate the duration of rainfall events is becoming shorter while the frequency and intensity of heavy rains are increasing. In addition, due to high population density and urbanization in most areas, floods frequently occur in flood-prone areas because of the growth of impervious surfaces. The Seoul Metropolitan Government is pursuing various structural and non-structural measures to resolve flood-prone areas, and a disaster-prevention performance target was set in consideration of the climate change impact on future precipitation; this study was conducted to help reduce overall flood damage in Seoul over the long term. In this study, 29 GCMs under the RCP4.5 and RCP8.5 scenarios were used for spatial and temporal disaggregation, and three research periods were considered: short-term (2006-2040, P1), mid-term (2041-2070, P2), and long-term (2071-2100, P3). For spatial downscaling, the daily GCM data were processed through quantile mapping based on the rainfall observed at the Seoul station managed by the Korea Meteorological Administration, and for temporal downscaling, the daily data were disaggregated into hourly data through k-nearest-neighbor resampling and a nonparametric temporal disaggregation technique using a genetic algorithm. Through temporal downscaling, 100 detailed scenarios were generated for each GCM scenario, the IDF curves were calculated from a total of 2,900 detailed scenarios, and the results were averaged to estimate the change in future extreme rainfall. As a result, it was confirmed that the 100-year return period, 1-hour duration rainfall increases by 8 to 16% under the RCP4.5 scenario and by 7 to 26% under the RCP8.5 scenario. Based on these results, the design rainfall needed to prepare for future climate change in Seoul was estimated, and it can be used to establish water-related disaster prevention policies for each purpose.
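The spatial downscaling step above relies on quantile mapping of GCM daily rainfall to station observations. Below is a minimal sketch of empirical quantile mapping; the synthetic gamma-distributed series and the simple empirical-CDF approach are illustrative assumptions, not the study's 29-GCM procedure.

```python
# Minimal sketch of empirical quantile mapping for bias-correcting GCM rainfall.
import numpy as np

def quantile_mapping(gcm_hist, obs_hist, gcm_future):
    """Map future GCM values onto the observed distribution via historical CDFs."""
    # Non-exceedance probability of each future value within the historical GCM series.
    probs = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    probs = np.clip(probs, 0.0, 1.0)
    # Corresponding values on the observed distribution.
    return np.quantile(obs_hist, probs)

rng = np.random.default_rng(0)
obs_hist = rng.gamma(shape=2.0, scale=8.0, size=3650)    # observed daily rainfall (mm), assumed
gcm_hist = rng.gamma(shape=2.0, scale=5.0, size=3650)    # biased historical GCM rainfall, assumed
gcm_future = rng.gamma(shape=2.0, scale=6.0, size=3650)  # future GCM rainfall, assumed

corrected = quantile_mapping(gcm_hist, obs_hist, gcm_future)
print(f"raw future mean: {gcm_future.mean():.1f} mm, corrected mean: {corrected.mean():.1f} mm")
```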

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.1-15 / 2021
  • When decisions are difficult, we ask for advice from friends or the people around us, and when we buy products online we read anonymous reviews before purchasing. With the advent of the data-driven era, the development of IT is producing large amounts of data from both individuals and objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions that used to depend on experts can now be made or executed directly from data. Today the recommender system plays a vital role in determining users' preferences for purchasing goods and is used to induce clicks on web services such as Facebook, Amazon, Netflix, and YouTube. For example, YouTube's recommender system, used by a billion people worldwide every month, draws on videos that users have 'liked' and videos they have watched. Recommender system research is therefore deeply linked to practical business, and many researchers are interested in building better solutions. Recommender systems generate recommendations from information obtained from their users, because developing a recommender system requires information on the items a user is likely to prefer. Through recommender systems we have come to trust patterns and rules derived from data rather than empirical intuition, and the growth of data and computing capacity has pushed machine learning toward deep learning. However, recommender systems are not a complete solution: they require sufficient data without scarcity as well as detailed information about individuals, and they work correctly only when these conditions hold. When the interaction log is insufficient, recommendation becomes a difficult problem for both consumers and sellers, because the seller needs to make recommendations at a personal level while the consumer wants appropriate recommendations backed by reliable data. In this paper, to improve the accuracy of 'appropriate recommendations' to consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system; the hybrid approach developed here is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set. Consumers buy products in online shopping malls and then write product reviews; review ratings reflect buyers who have already purchased, giving users confidence before buying the product. However, recommendation systems mainly use scores or ratings rather than reviews to suggest items purchased by many users, even though consumer reviews contain product opinions and user sentiment that could be used in evaluation. By incorporating these elements, this paper aims to improve the recommendation system. The proposed algorithm is intended for situations in which individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on recommendations appropriately, and the algorithm implements the recommendation through collaborative filtering. Predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE).
Netflix, for instance, has strategically used its recommender system in competitions that reduce RMSE every year, making practical use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches, deep learning, and other techniques for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but machine learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT-based sentiment analysis to minimize these disadvantages of machine learning. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). As a result of the experiment, the BERT-based recommender system performed best.
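As a rough illustration of the idea above, the following sketch scores each review's sentiment with a pretrained BERT-family model (via the Hugging Face transformers pipeline) and blends it with the explicit rating before it would be fed to a collaborative-filtering model. The default pipeline model, the blend weight, and the toy reviews are illustrative assumptions, not the paper's actual architecture or data.

```python
# Minimal sketch: BERT-based sentiment scores blended with explicit ratings,
# plus the RMSE/MAE metrics mentioned in the abstract.
from transformers import pipeline
from sklearn.metrics import mean_squared_error, mean_absolute_error

sentiment = pipeline("sentiment-analysis")  # defaults to a BERT-family model

reviews = [
    {"user": "u1", "item": "i1", "rating": 5.0, "text": "Great quality, fast delivery."},
    {"user": "u2", "item": "i1", "rating": 2.0, "text": "Broke after a week, disappointing."},
]

alpha = 0.7  # assumed weight between explicit rating and review sentiment
for r in reviews:
    s = sentiment(r["text"])[0]
    # Map the model output to a 1-5 scale sentiment score.
    sent_score = 1.0 + 4.0 * (s["score"] if s["label"] == "POSITIVE" else 1.0 - s["score"])
    r["blended"] = alpha * r["rating"] + (1.0 - alpha) * sent_score

# The blended scores would replace raw ratings in the user-item matrix of a CF model.
true = [r["rating"] for r in reviews]
pred = [r["blended"] for r in reviews]
print("RMSE:", mean_squared_error(true, pred) ** 0.5, "MAE:", mean_absolute_error(true, pred))
```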

A Study on Agrifood Purchase Decision-making and Online Channel Selection according to Consumer Characteristics, Perceived Risks, and Eating Lifestyles (소비자 특성, 지각된 위험, 식생활 라이프스타일에 따른 농식품 구매결정 및 온라인 구매채널 선택에 관한 연구)

  • Lee, Myoung-Kwan;Park, Sang-Hyeok;Kim, Yeon-Jong
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.16 no.1 / pp.147-159 / 2021
  • After the COVID-19 pandemic of 2020, consumers' online consumption has increased rapidly, and non-store online retail channels have shown high growth. In particular, social media has gained status as a marketplace where direct transactions take place, beyond its role as a means of promoting companies' brands and products. This study examines how consumer behavior after the pandemic differs when purchasing agri-food online, in the choice between existing online shopping malls and SNS markets, which can be classified into open and closed social media, and which types of products are preferred in each channel. For this study, consumers' demographic characteristics, perceived risks, and dietary lifestyle were set as independent variables to investigate their effect on the choice of online shopping medium and on product selection. The empirical results can be summarized as follows. When consumers purchase agri-food online, significant differences in demographic characteristics, perceived risks, and the detailed factors of dietary lifestyle appear in the choice of shopping channel among online shopping malls, open social media, and closed social media. Consumers who choose the open SNS market are more likely to be men, to have lower household income, and to seek health and taste. Consumers who choose the closed SNS market tend to live in rural areas and to perceive a high degree of delivery risk. Consumers who choose existing online shopping malls tend to have higher education, higher personal income, and to seek taste and economy. Through this study, we aim to provide practical assistance by offering a basis for judgment to farmers who have difficulty selecting an online shopping medium suited to their products' characteristics. This study can be differentiated from existing studies in that social media is approached not as a simple promotional channel for agri-food but as a market in which direct transactions arise.