• Title/Summary/Keyword: systems engineering


Evaluation of $^{14}C$ Behavior Characteristic in Reactor Coolant from Korean PWR NPP's (국내 경수로형 원자로 냉각재 중의 $^{14}C$ 거동 특성 평가)

  • Kang, Duk-Won;Yang, Yang-Hee;Park, Kyong-Rok
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.7 no.1
    • /
    • pp.1-7
    • /
    • 2009
  • This study focused on determining the chemical composition of $^{14}C$ - in terms of both organic and inorganic $^{14}C$ content - in reactor coolant from three different PWR reactor types. The purpose was to evaluate the characteristics of $^{14}C$ that can serve as a basis for reliable estimation of environmental releases at domestic PWR sites. $^{14}C$ is the most important nuclide in the inventory, since it is one of the main dose contributors in future release scenarios. The reason is its high mobility in the environment, its biological availability, and its long half-life (5,730 years). More recent studies - in which the organic $^{14}C$ species believed to form in the coolant under reducing conditions were investigated in more detail - show that the organic compounds are not limited to hydrocarbons and CO. Possible organic compounds formed include formaldehyde, formic acid, and acetic acid. Under oxidizing conditions, oxidized carbon forms appear, mainly carbon dioxide and bicarbonate. Measurements of organic and inorganic $^{14}C$ in various water systems were also performed. The $^{14}C$ inventory in the reactor water was found to be 3.1 GBq/kg in the PWR, of which less than 10% was in inorganic form. Generally, the $^{14}C$ activity in the water was divided equally between the gas and water phases. Although organic $^{14}C$ compounds were the dominant species during reactor operation, the chemical forms of $^{14}C$ released from the plant stack show a different composition depending on operating conditions such as temperature, pH, volume control tank venting, and shutdown chemistry.


Prediction of Forest Fire Danger Rating over the Korean Peninsula with the Digital Forecast Data and Daily Weather Index (DWI) Model (디지털예보자료와 Daily Weather Index (DWI) 모델을 적용한 한반도의 산불발생위험 예측)

  • Won, Myoung-Soo;Lee, Myung-Bo;Lee, Woo-Kyun;Yoon, Suk-Hee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.14 no.1
    • /
    • pp.1-10
    • /
    • 2012
  • The Digital Forecast of the Korea Meteorological Administration (KMA) provides 5 km gridded weather forecasts over the Korean Peninsula and the surrounding oceanic regions in Korean territory. It provides 12 forecast elements, such as three-hour-interval temperature, sky condition, wind direction, wind speed, relative humidity, wave height, probability of precipitation, and 12-hour accumulated rain and snow, as well as daily minimum and maximum temperatures. These forecast elements are updated every three hours for the next 48 hours. The objective of this study was to construct the Forest Fire Danger Rating Systems on the Korean Peninsula (FFDRS_KORP) based on the daily weather index (DWI) and to improve its accuracy using the digital forecast data. We produced thematic maps of temperature, humidity, and wind speed over the Korean Peninsula to analyze the DWI. To calculate the DWI of the Korean Peninsula, a forest fire occurrence probability model obtained by logistic regression analysis was applied, i.e. $[1+{\exp}\{-(2.494+(0.004{\times}T_{max})-(0.008{\times}EF))\}]^{-1}$. A verification test against real-time observatory data showed that the digital forecast predictions were more accurate than the RDAPS data. A comparison of the average forest fire danger rating index (sampled at 233 administrative districts) also showed that the digital forecast gave higher relative accuracy than the RDAPS data. The coefficient of determination of the forest fire danger rating was $R^2$=0.854. There was a difference of 0.5 between the national mean fire danger rating index computed with the real-time observatory data (70) and that computed with the digital forecast (70.5).
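The logistic regression model quoted in the abstract can be evaluated directly. A minimal sketch, assuming $T_{max}$ is the daily maximum temperature and $EF$ the effective humidity (the abstract does not spell out the units):

```python
import math

def fire_occurrence_probability(t_max, ef):
    """Forest fire occurrence probability from the abstract's logistic model:
    p = 1 / (1 + exp(-(2.494 + 0.004*T_max - 0.008*EF)))
    t_max: daily maximum temperature, ef: effective humidity (assumed inputs).
    """
    z = 2.494 + 0.004 * t_max - 0.008 * ef
    return 1.0 / (1.0 + math.exp(-z))
```

As expected from the signs of the coefficients, the probability rises with temperature and falls with humidity.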

Effect of Processing Additives on Vulcanization and Properties of EPDM Rubber (EPDM 고무의 첨가제에 따른 가류 및 물성에 미치는 영향 연구)

  • Lee, Soo;Bae, Joung Su
    • Journal of the Korean Applied Science and Technology
    • /
    • v.35 no.1
    • /
    • pp.173-185
    • /
    • 2018
  • The effects of three different types of dispersion- and flow-improving additives, composed of fatty acid esters, fatty acid metal salts, and an amide compound, on the vulcanization and mechanical properties of carbon-black-filled EPDM rubber compounds were investigated using a Mooney viscometer, a moving die rheometer, a hardness tester, and a universal testing machine. The aging characteristics of the vulcanized EPDM compounds were also investigated. The Mooney viscosity measured at $125^{\circ}C$ tended to decrease in the order of amide type > metal salt type > ester type additive. Scorch time showed little or no change with the addition of ester or metal salt type additives, but the amide type additive shortened the scorch time by more than one minute. Rheological measurements at $160^{\circ}C$ showed that vulcanization was faster for the metal salt type and amide type additive systems. The delta torque values of the EPDM compound increased with the metal salt type and amide type additives, but decreased slightly with the ester type additive. The tensile strength of the EPDM compound was greatly improved when an ester type additive was added, whereas the amide type and metal salt type additives had no significant effect. The elongation was significantly improved by the metal salt type additive, while the others had no significant effect. The tear strength of the EPDM compounds increased with the addition of all the additives, and increased remarkably in the case of the metal salt type additive. The hardness of the EPDM compounds was nearly the same regardless of additive type. Thermal aging of the EPDM compounds at $100^{\circ}C$ for 24 h produced little change in the case of the metal salt type and amide type additives, but the elongation tended to decrease by 10-20% for all EPDM compounds containing additives.

An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.113-129
    • /
    • 2015
  • Place branding is an important income-generating activity that gives special meaning to a specific location while producing identity and communal value based on an understanding of the place branding concept. Many other fields, such as marketing, architecture, and city planning, exert an influence in creating an impressive brand image. A place brand that is highly recognized both by Koreans and by foreigners creates significant economic effects. There has been research on building a strategic and detailed place brand image; the representative work was carried out by Anholt, who surveyed two million people from 50 different countries. However, such investigations, including survey research, require a great deal of workforce effort and significant expense. As a result, there is a need for more affordable, objective, and effective research methods. The purpose of this paper is to find a way to measure the intensity of a place brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constituting the place brand image from related web documents; in this way, we can measure the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's city image consistency index ranking of 50 cities around the world. Four methods were applied in the test. First, the RANDOM method artificially ranks the cities included in the experiment. The HUMAN method prepares a questionnaire and selects 9 volunteers who are well acquainted with both brand management and the cities to be evaluated; they are then asked to rank the cities, and the results are compared with Anholt's. The TM method applies the proposed method to evaluate the cities with all evaluation criteria.
TM-LEARN, an extended version of TM, selects significant evaluation items from the items in every criterion and then evaluates the cities with only the selected criteria. RMSE is used as a metric to compare the evaluation results. The experimental results are as follows. Firstly, compared to the evaluation method that targets ordinary people, this method appeared to be more accurate. Secondly, compared to the traditional survey method, the time and cost are much lower because automated means were used. Thirdly, the proposed methodology is timely because the evaluation can be repeated at any time. Fourthly, compared to Anholt's method, which evaluated only a pre-specified set of cities, the proposed methodology is applicable to any location. Finally, the proposed methodology has relatively high objectivity because the research was conducted on open data. As a result, our text mining approach to city image evaluation was found to be valid in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method provides managers with clear guidelines for brand management in the public and private sectors. In the public sector, local officials could use the proposed method to formulate strategies and enhance the image of their places efficiently. Rather than conducting heavy questionnaires, local officials could quickly monitor the current place image beforehand, and then decide to run a formal place image test only if the results from the proposed method are out of the ordinary, whether they indicate an opportunity or a threat to the place.
Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of a place brand from text, sentiment analysis, and more, marketing strategists or civil engineering professionals may obtain deeper and more abundant insights for better place brand images. In the future, a prototype system will be implemented to demonstrate the feasibility of the idea proposed in this paper.
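The abstract compares the four ranking methods against Anholt's reference using RMSE. A minimal sketch of that comparison, with made-up example scores (the real data are not in the abstract):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between two equal-length score lists."""
    assert len(predicted) == len(actual)
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Hypothetical city rankings: a perfect match gives RMSE 0,
# and a method closer to the reference gives a smaller RMSE.
reference = [1, 2, 3, 4]
method_a = [1, 2, 3, 4]   # e.g. TM-LEARN
method_b = [2, 1, 4, 3]   # e.g. RANDOM
```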

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.227-240
    • /
    • 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on existing concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze information technology in detail, while the latter is unable to identify the relationships between such technologies. In order to overcome the limitations of network-based and keyword-based analyses, this study, which blends the two methods, suggests a keyword-network-based analysis methodology. In this study, we collected significant technology information related to Light Emitting Diodes (LED) from each patent through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties. The value ranges between 0 and 1, with higher values indicating denser networks and lower values indicating sparser networks. In real-world networks, the density varies depending on the size of a network; increasing the size of a network generally leads to a decrease in the density. The clustering coefficient is a network-level measure that illustrates the tendency of nodes to cluster into densely interconnected modules. This measure reflects the small-world property, in which a network can be highly clustered yet still have a small average distance between nodes, in spite of the large number of nodes.
Therefore, the low density of the patent keyword network means that its nodes are connected only sparsely, while the high clustering coefficient shows that connected nodes cluster closely with one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is likely to attain further new links as the network evolves. Unlike normal distributions, the power-law distribution has no representative scale. This means that one cannot pick a representative or average value, because there is always a considerable probability of finding much larger values. Networks with power-law distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed scale-free distribution is the fundamental signature of the emergent collective behavior of the actors who form the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution implies that preferential attachment explains the origin of heavy-tailed distributions across the growing patent keyword network. Third, we found that among keywords that flowed into a particular field, the vast majority of keywords with new links joined existing keywords in the associated community to form the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses.
Furthermore, the keyword combination information derived from the suggested methodology enables one to forecast which concepts will combine to form a new patent dimension, and to refer to those concepts when developing a new patent.
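The two network measures the abstract defines (density as ties over possible ties, and the clustering coefficient as the fraction of a node's neighbour pairs that are themselves linked) can be computed directly on an adjacency map. A minimal sketch with a toy graph, not the paper's LED keyword network:

```python
from itertools import combinations

def density(adj):
    """Ties divided by all possible ties in an undirected graph."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 2 * edges / (n * (n - 1)) if n > 1 else 0.0

def clustering_coefficient(adj, node):
    """Local clustering: linked neighbour pairs / all neighbour pairs."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2 * links / (k * (k - 1))

def average_clustering(adj):
    return sum(clustering_coefficient(adj, n) for n in adj) / len(adj)

# Two tight triangles joined by one bridge: sparse overall, locally clustered.
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}
```

On this toy graph the average clustering exceeds the density, the same qualitative pattern the abstract reports for the patent keyword network.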

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users worldwide have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their businesses. They also collect, store, and manage, at their discretion, many web documents published on the web. Along with this increase in the information documents that must be managed within companies, the need for a solution that locates information documents more accurately among a huge number of information sources has grown. To satisfy this need for accurate search, the search engine solution market keeps expanding. The most important functionality among the many provided by a search engine is locating accurate information documents within a huge set of information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information considered a true answer actually is such, whereas recall is a measure of completeness, that is, what percentage of the true answers are retrieved. These two measures can be weighted differently according to the applied domain. If we need to search information exhaustively, as with patent documents and research papers, it is better to increase recall. On the other hand, when the amount of information is small, it is better to increase precision. Most existing web search engines typically use a keyword search method that returns web documents containing the keywords that correspond to the search words entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered.
However, this method has a fundamental limitation: it does not consider the search intention of the user, and thereby retrieves irrelevant results alongside relevant ones. Thus, it takes additional time and effort to sort the relevant results out of everything returned by a search engine. That is, the keyword search method can increase recall, but it makes it difficult to locate the web documents a user actually wants to find, because it provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core search functionalities provided by existing search engine solutions. The method enables a search engine to provide optimal search results by inferring the search intention of the user. To that end, we build an ontology that contains the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the set of search keywords entered by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implemented a prototype search system and tested it in the patent domain, where we experimented on searching for relevant documents associated with a patent. The experiment shows that our system increases both recall and precision, and improves search productivity through an improved user interface that lets a user interact with the search system effectively. In future research, we will validate the performance of our prototype system by comparing it with other search engine solutions, and will extend the applied domain to other information search domains such as portals.
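The core idea, expanding the user's keywords with ontology-derived synonyms before matching, can be sketched with a toy synonym map. The ontology below is entirely hypothetical (the paper's patent-domain ontology is not given in the abstract):

```python
# Hypothetical mini-ontology: each concept maps to its synonym set.
ONTOLOGY = {
    "display": {"screen", "panel", "monitor"},
    "patent": {"intellectual property"},
}

def expand_query(keywords, ontology):
    """Add ontology synonyms to the user's keywords, so documents that
    phrase the same concept differently are still retrieved (higher recall)."""
    expanded = set()
    for kw in keywords:
        kw = kw.lower()
        expanded.add(kw)
        expanded |= ontology.get(kw, set())
    return expanded

def search(docs, keywords, ontology):
    """Return the documents containing any expanded keyword."""
    terms = expand_query(keywords, ontology)
    return [d for d in docs if any(t in d.lower() for t in terms)]
```

A plain keyword match for "display" would miss a document that only says "screen"; the expanded query finds it.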

Extracting Beginning Boundaries for Efficient Management of Movie Storytelling Contents (스토리텔링 콘텐츠의 효과적인 관리를 위한 영화 스토리 발단부의 자동 경계 추출)

  • Park, Seung-Bo;You, Eun-Soon;Jung, Jason J.
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.279-292
    • /
    • 2011
  • Film is a representative medium that transmits stories to audiences. A story is essentially described through the characters in the movie. Unlike other simple videos, movies deploy narrative structures to depict the various conflicts and collaborations between characters. These narrative structures consist of three main acts: beginning, middle, and end. The beginning act includes 1) the introduction of the main characters and backgrounds, and 2) the implication of conflicts and clues for incidents. The middle act describes the events developed by both internal and external factors, and the story's dramatic tension heightens. Finally, in the end act, the developed events are resolved, and the topic of the story and the writer's message are transmitted. When story information is extracted from a movie, one must consider that the information carries different weights depending on the narrative structure; that is, extracted information influences the story's deployment differently depending on whether it is located in the beginning, middle, or end act. The beginning act is the part that exposes various information to the audience for the story set-up, such as the characterization and the depiction of backgrounds. It is therefore necessary to extract many kinds of information from the beginning act in order to summarize a movie or retrieve character information. Thus, this paper proposes a novel method for extracting the beginning boundary: a method that detects the boundary scene between the beginning and middle acts using an accumulation graph of characters. The beginning act consists of the scenes that introduce the important characters, imply the conflict relationships between them, and suggest clues to resolve the troubles. First, the scene after which no new important characters appear should be detected, in order to find the scene where their introduction is complete.
The important characters are the major and minor characters, who can be treated as important since they lead the story's progression. Extras should be excluded from the accumulation graph of characters in order to find the scene where the introduction of important characters is complete; extras are the characters that appear in only a few scenes. Second, the inflection point is detected in the accumulation graph of characters: the point where the increasing line changes to a horizontal line. Namely, when the slope of the line stays at zero over many scenes, the starting point of this zero-slope line becomes the inflection point. The inflection point is detected in the accumulation graph of characters without extras. Third, several further scenes are considered as additional story progression, such as the implication of conflicts and the suggestion of clues. The movie story actually arrives at the scene between the beginning and middle acts only after several additional scenes have elapsed following the introduction of the important characters. We determined the ratio of these additional scenes to the total scenes by experiment, obtaining 7.67%. Adding this ratio to the inflection point of the graph yields the story inflection point where the beginning act changes to the middle act. Our proposed method consists of these three steps. We selected 10 movies of various genres for the experiment and evaluation. By measuring the accuracy of boundary detection, we have shown that the proposed method is effective.
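The inflection-point step can be sketched as a scan over the cumulative count of important characters per scene. This is a minimal illustration of the idea, not the authors' implementation; the plateau length used to declare the slope "zero" is an assumed parameter:

```python
def inflection_scene(cumulative_counts, plateau_len=3):
    """Return the first scene index where the cumulative count of important
    characters stops growing (the slope stays zero for plateau_len scenes)."""
    for i in range(len(cumulative_counts) - plateau_len):
        window = cumulative_counts[i:i + plateau_len + 1]
        if all(v == window[0] for v in window[1:]):
            return i
    return None

def beginning_boundary(cumulative_counts, total_scenes, extra_ratio=0.0767):
    """Boundary scene = inflection point plus the paper's ~7.67% of all scenes
    allowed for additional story progression (conflicts, clues)."""
    ip = inflection_scene(cumulative_counts)
    if ip is None:
        return None
    return ip + round(extra_ratio * total_scenes)
```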

Issue tracking and voting rate prediction for 19th Korean president election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.199-219
    • /
    • 2018
  • With the everyday use of the Internet and the spread of various smart devices, users can communicate in real time, and the existing communication style has changed. With the shift of the information subject to the Internet, data became massive, producing the very large information sets called big data. Big data is seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exists in various places such as newspapers, books, and the web, it is diverse and large in volume, and therefore suitable for understanding social reality. In recent years, there have been increasing attempts to analyze text from the web, such as SNS and blogs, where the public can communicate freely. This is recognized as a useful way to grasp public opinion immediately, so it can be used for political, social, and cultural issue research. Text mining has received much attention as a way to investigate the public's perception of candidates and to predict voting rates in place of polling, because many people question the credibility of surveys; people also tend to refuse to answer, or to conceal their real intention, when asked to respond to a poll. This study collected comments from the largest Internet portal site in Korea and conducted research on the 19th Korean presidential election in 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a period that includes the prohibition period for public opinion polls just prior to election day. We analyzed frequencies, associated emotional words, topic emotions, and candidate voting rates. Through frequency analysis, we identified the words that were the most important issues each day. In particular, following each presidential debate, the candidate who became an issue appeared at the top of the frequency analysis.
Through the analysis of associated emotional words, we identified the issues most relevant to each candidate. Topic emotion analysis was used to identify each candidate's topics and the public's emotions about those topics. Finally, we estimated the voting rate by combining the volume of comments with the sentiment score. In doing so, we explored the issues for each candidate and predicted the voting rate. The analysis showed that news comments are an effective tool for tracking the issues around presidential candidates and for predicting voting rates. In particular, this study produced daily issues and a quantitative index of sentiment; it also predicted the voting rate for each candidate and precisely matched the ranking of the top five candidates. Each candidate can thereby objectively grasp public opinion and reflect it in election strategy: candidates can use positive issues more actively in their strategies and try to correct negative issues. In particular, candidates should be aware that they can suffer severe damage to their reputation if they face a moral problem. Voters can objectively examine the issues and public opinion about each candidate and make more informed decisions when voting. If they refer to results like these before voting, they will be able to see public opinion drawn from big data and vote from a more objective perspective. If candidates campaign with reference to big data analysis, the public will engage more actively on the web, recognizing that their wants are being reflected. Political views can then be expressed in various places on the web, contributing to political participation by the people.
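The final step, combining comment volume with sentiment to estimate vote shares, can be sketched as below. The abstract does not give the authors' exact formula; the volume-times-sentiment weighting here is a plausible assumption, and the numbers are illustrative only:

```python
def predict_vote_shares(comment_volume, sentiment):
    """Combine per-candidate comment volume and mean sentiment score (0..1)
    into normalized predicted vote shares. A sketch of the idea, not the
    paper's exact model."""
    raw = {c: comment_volume[c] * sentiment[c] for c in comment_volume}
    total = sum(raw.values())
    return {c: raw[c] / total for c in raw}

# Hypothetical candidates: A draws more comments, B draws warmer ones.
volumes = {"A": 1000, "B": 500}
scores = {"A": 0.6, "B": 0.8}
shares = predict_vote_shares(volumes, scores)
```

The shares sum to one, so the output can be read directly as a predicted ranking.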

Network Planning on the Open Spaces in Geumho-dong, Seoul (서울 금호동 오픈스페이스 네트워크 계획)

  • Kang, Yon-Ju;Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.40 no.5
    • /
    • pp.51-62
    • /
    • 2012
  • Geumho-dong, Seoul, a redeveloped residential area, is located in the foothills of Mt. Eungbong. The geographical undulation, the construction of a large apartment complex, and the partial implementation of the redevelopment project have caused severe physical and social disconnections in this area. In order to restore the functioning of the disconnected community, this study pays attention to regenerating the open spaces as everyday places and to forming a network system among them. The various types of open spaces are classified into point- or area-type 'bases' and linear 'paths' to analyze the status of the network. More than half of the open spaces have a connecting distance of 500 m or more; furthermore, many areas are not even included in the service area of the open spaces. An analysis of connectivity and integration values using an axial map was carried out to identify weak linkages and to choose the sections where additional bases are required. In addition, to improve the quality of the bases and paths, a field investigation was conducted and problems were diagnosed. A network plan for the open spaces in Geumho-dong was then established, ensuring the quality and quantity of bases and paths. The plan includes the construction of an additional major base in the central area and six secondary bases elsewhere, and sets out ways to improve the environment of underdeveloped secondary bases. In the neighborhood parks in the Mt. Daehyun area, a major path is added and the environment of the paths is improved in certain sections. As a result of the network planning, the connecting distances between bases are reduced significantly, the connectivity and integration values of the area are increased, and the service areas of the open spaces properly cover the whole area.
Although this study has some limitations, such as the need for legal and institutional support and the difficulty of a quantitative indexing process, its significance lies in suggesting a more reasonable and practical plan for the overall network system by defining the complex types of open spaces simply and clearly and by examining their organic relationships quantitatively and qualitatively.

A study on the implementation of Medical Telemetry systems using wireless public data network (무선공중망을 이용한 의료 정보 데이터 원격 모니터링 시스템에 관한 연구)

  • 이택규;김영길
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2000.10a
    • /
    • pp.278-283
    • /
    • 2000
  • As information communication technology has developed, we can now easily check blood pressure, pulse, electrocardiogram (ECG), SpO2, and blood tests at home. Routine health monitoring becomes possible by connecting home medical instruments to the wireless public data network. This service relieves the inconvenience of visiting the hospital every time and saves individual time and cost. In each house, biosignal data detected from the human body are transmitted to a distant hospital over the wireless public data network. The medical information transmission system also utilizes a wireless short-range network: it transmits the acquired biosignals wirelessly from the personal device to the main center system in the hospital. The remote telemetry system is realized using a wireless medium access protocol, implemented by adapting the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol standardized in IEEE 802.11. Among the home-care telemetry functions that can measure blood pressure, pulse, ECG, and SpO2, this study implements the ECG measurement part. It embeds the ECG function into a mobile device and adds a 900 MHz band wireless public data interface. Then the elderly, patients, or anyone in the house can acquire an ECG and keep and record the data. This would be essential for managing those diagnosed with heart disease, including more complicated heart diseases, and for continuously observing latent heart disease patients. To realize the medical information transmission system based on the wireless network, the ECG data among the biosignals are transmitted using a wireless network modem and the NCL (Native Control Language) protocol, and connected to a wired host computer through the SCR (Standard Context Routing) protocol in the network.
The computer checks the recorded individual information and the acquired ECG data, and then sends the corresponding examination result back to the mobile device. The study suggests a medical transmission system model utilizing the wireless public data network.
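The medium access scheme the abstract adapts, CSMA/CA with collision avoidance, resolves contention by drawing a random backoff delay whose window doubles after each collision (binary exponential backoff). A minimal sketch of that backoff computation; the window sizes and slot time are illustrative defaults, not values from the paper or the 802.11 standard:

```python
import random

def csma_ca_backoff(attempt, cw_min=8, cw_max=256, slot_time_us=50):
    """Random backoff delay (microseconds) for one transmission attempt,
    in the spirit of IEEE 802.11 CSMA/CA: the contention window doubles
    on each failed attempt up to cw_max, then a slot count is drawn
    uniformly from [0, cw). Parameter values are assumptions."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    slots = random.randint(0, cw - 1)
    return slots * slot_time_us
```

A station waits this delay (while sensing the carrier) before retrying, which spreads out competing retransmissions in time.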
