• Title/Summary/Keyword: software system

Search Result: 12,098

An Ontology-Based Movie Contents Recommendation Scheme Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie contents has become easier with the advent of smart TVs, IPTV, and web services that allow users to search for and watch movies, and searches for movies matching user preferences are increasing accordingly. However, because the amount of available movie content is so large, users must spend considerable effort and time searching for it. Hence, much research has been devoted to recommending personalized items through the analysis and clustering of user preferences and profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles; the relations among metadata express similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we selected genre, actor/actress, keywords, and synopsis as the main metadata, since these influence which movies users choose. The user model contains demographic information about the user and relations between the user and movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we entered individual data for 14,374 movies into each concept of the contents ontology model. This movie metadata knowledge base is used to find movies related to the metadata a user finds interesting, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components.
The first component searches for candidate movies based on the user's demographic information. Here, we group users according to demographic information so that movies can be recommended per group, define rules for assigning users to groups, and generate the query used to retrieve candidate movies for recommendation. The second component searches for candidate movies based on user preferences. When choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords; users enter their preferences, and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of a recommended candidate movie carries a weight that is later used to decide the recommendation order. The third component merges the results of the first and second components: we calculate a weight for each movie from the weight values of its metadata and then sort the movies by this weight. The fourth component analyzes the result of the third component, determines the level of contribution of each metadata type, and applies the contribution weight to the metadata. The output of this step is the final recommendation presented to users. We tested the usability of the proposed scheme with a web application implemented using JSP, JavaScript, and the Protégé API. In our experiment, we collected results from 20 men and women ranging in age from 20 to 29, using 7,418 movies with ratings of at least 7.0. We provided Top-5, Top-10, and Top-20 recommended movie lists to the users, who then chose the movies they found interesting. On average, users chose 2.1 interesting movies from the Top-5 list, 3.35 from the Top-10 list, and 6.35 from the Top-20 list.
These results are better than those obtained by using each metadata type individually.
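The merge-and-rank steps of the proposed architecture (components three and four) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the metadata names and contribution weights are hypothetical.

```python
# Hypothetical sketch of the merge-and-rank step: each candidate movie
# carries per-metadata weights from the demographic and preference
# searches; we sum them, apply a per-metadata contribution weight,
# and sort to obtain the Top-N recommendation list.

# Per-metadata contribution weights (illustrative values only).
CONTRIBUTION = {"genre": 1.0, "actor": 0.8, "keyword": 0.6, "synopsis": 0.4}

def merge_and_rank(demographic_candidates, preference_candidates, top_n=5):
    """Merge two candidate sets and rank movies by total weighted score."""
    merged = {}
    for candidates in (demographic_candidates, preference_candidates):
        for movie, meta_weights in candidates.items():
            acc = merged.setdefault(movie, {})
            for meta, w in meta_weights.items():
                acc[meta] = acc.get(meta, 0.0) + w
    # Apply the contribution weight of each metadata type, then sort.
    scored = {
        movie: sum(CONTRIBUTION.get(meta, 0.0) * w for meta, w in mw.items())
        for movie, mw in merged.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_n]
```

For example, a movie matched by both components accumulates weight from each and rises in the final ordering.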

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released on the Internet in real time. For this reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques for extracting, classifying, understanding, and assessing the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in their methods, which are often technically complicated and insufficiently user-friendly for supporting business decisions and planning. In this study, we formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data-gathering stage to the final presentation. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, search tools, a DB-to-DB interface, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis.
If garbage data are not removed, the results of social media analysis will not yield meaningful and useful business insights; natural language processing techniques should be applied to clean social media data. The next step is the opinion mining phase, in which the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, review or reply status, favorites, and so on. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is used for reputation analysis; there are also various other applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; to the extent possible, its deliverables should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and we classified the content into more detailed categories such as marketing features, environment, and reputation.
In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, including domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-source R packages. With a swift glance, business actors can quickly detect areas that are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume as a category-by-time matrix whose color density varies over time periods. A valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" of the business situation, since its hierarchical structure can present buzz volume and sentiment together for a given period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use these kinds of results for timely decision making in response to ongoing market changes. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
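The analyzing phase described above, scoring sentiment per post and aggregating buzz volume over time, can be sketched in a few lines. The tiny lexicon and sample posts below are purely illustrative; a real analysis would use domain-specific resources like those the study built for the instant noodle business.

```python
from collections import Counter

# Tiny hypothetical sentiment lexicon; real analyses would use
# domain-specific language resources as described in the study.
POSITIVE = {"delicious", "good", "love"}
NEGATIVE = {"bad", "salty", "disappointed"}

def score_post(text):
    """Return (positive, negative) word counts for one piece of content."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos, neg

def monthly_buzz(posts):
    """Aggregate buzz volume and net sentiment per month ('YYYY-MM')."""
    volume, sentiment = Counter(), Counter()
    for month, text in posts:
        pos, neg = score_post(text)
        volume[month] += 1
        sentiment[month] += pos - neg
    return volume, sentiment
```

The resulting per-month volume and sentiment counts are exactly what the volume/sentiment graphs and heat maps visualize.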

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.29-44
    • /
    • 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes consumers a great deal of time and effort to check the massive number of reviews individually, and carelessly written reviews actually inconvenience them. Thus many online vendors provide mechanisms to identify the reviews customers perceive as most helpful (Cao et al., 2011; Mudambi and Schuff, 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review and use this feedback to rank and re-order the reviews. However, many reviews receive little or no feedback, which makes their helpfulness hard to identify, and because feedback takes time to accumulate, newly authored reviews do not have enough of it; for example, only 20% of the reviews in the Amazon Review Dataset (McAuley and Leskovec, 2013) have more than five feedback votes (Yan et al., 2014). The purpose of this study is to analyze the factors affecting the helpfulness of online product reviews and to derive a forecasting model that selectively provides reviews likely to be helpful to consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews using text-mining techniques and identified which of these elements determine review helpfulness. In particular, considering that the review characteristics and helpfulness determinants may differ between apparel products (experiential goods) and electronic products (search goods), we compared review characteristics within each product group and established determinants for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com.
To understand a review text, we first extract linguistic and psychological characteristics, such as word count and the levels of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). We then explore the descriptive statistics of the review texts for each category and statistically compare their differences using t-tests. Lastly, we perform regression analysis using the data mining software RapidMiner to find the determinant factors. Comparing the review characteristics of electronic and apparel products, we found that reviewers used more words and longer sentences when writing reviews for electronic products. Regarding content characteristics, electronic product reviews included more analytic words, carried more clout, related more to cognitive processes (CogProc), and contained more words expressing negative emotions (NegEmo) than apparel product reviews. Apparel product reviews, on the other hand, included more personal, authentic, and positive emotions (PosEmo) and more perceptual processes (Percept). Next, we analyzed the determinants of review helpfulness for the two product groups. In both groups, reviews perceived as helpful had high product ratings from their reviewers, contained a larger number of total words and many expressions involving perceptual processes, and expressed fewer negative emotions. In addition, apparel product reviews with many comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived as helpful.
In the case of electronic product reviews, analytical reviews with a high expertise index that contained many authentic expressions, cognitive processes, and positive emotions (PosEmo) were perceived as helpful. These findings are expected to help consumers effectively identify useful product reviews in the future.
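The modeling step, regressing perceived helpfulness on LIWC-style text features, can be sketched as follows. This is not the study's model: the feature values and helpfulness scores below are made up for illustration, and plain ordinary least squares stands in for the RapidMiner analysis.

```python
import numpy as np

# Hypothetical LIWC-style features per review:
# word_count, positive emotion, negative emotion, analytic score.
features = np.array([
    [120, 0.05, 0.01, 60.0],
    [40,  0.02, 0.08, 30.0],
    [200, 0.07, 0.02, 75.0],
    [80,  0.03, 0.05, 45.0],
])
# Target: fraction of "helpful" votes each review received (illustrative).
helpfulness = np.array([0.8, 0.2, 0.9, 0.5])

# Ordinary least squares with an intercept column; the fitted
# coefficients indicate how each feature relates to helpfulness.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, helpfulness, rcond=None)
predicted = X @ coef
```

In practice the regression would be fitted separately for the apparel and electronics groups, since the study found that their helpfulness determinants differ.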

The Application of 3D Bolus with Neck in the Treatment of Hypopharynx Cancer in VMAT (Hypopharynx Cancer의 VMAT 치료 시 Neck 3D Bolus 적용에 대한 유용성 평가)

  • An, Ye Chan;Kim, Jin Man;Kim, Chan Yang;Kim, Jong Sik;Park, Yong Chul
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.41-52
    • /
    • 2020
  • Purpose: To evaluate clinical applicability by comparing the dosimetric usefulness, setup reproducibility, and efficiency of two treatment plans in which a commercial bolus (CB) and a 3D bolus produced by 3D printing were applied to the neck during VMAT treatment of hypopharynx cancer. Materials and Methods: Based on a CT image of the RANDO phantom with the CB applied, a 3D bolus of the same shape was fabricated. The 3D bolus was printed in a polyurethane acrylate resin with a density of 1.2 g/cm³ by the SLA technique, using an OMG SLA 660 printer and Materialise Magics software. Based on the two CT image sets using the CB and the 3D bolus, treatment plans were established assuming VMAT treatment of hypopharynx cancer. CBCT images were acquired 18 times for each of the two treatment plans, and treatment efficiency was evaluated by measuring the setup time each time. Based on the acquired CBCT images, adaptive plans were computed in the Pinnacle treatment planning system to evaluate target and normal-organ doses and changes in bolus volume. Results: Compared with the CB plan, the setup time for the 3D bolus plan was shorter by an average of 28 s. Over the treatment course, the bolus volume was 86.1±2.70 cm³ versus 83.9 cm³ in the CB initial plan, and 99.8±0.46 cm³ versus 92.2 cm³ in the 3D bolus initial plan. The CTV minimum dose was 167.4±19.38 cGy versus 191.6 cGy in the CB initial plan and 149.5±18.27 cGy versus 167.3 cGy in the 3D bolus initial plan. The CTV mean dose was 228.3±0.38 cGy versus 227.1 cGy (CB) and 227.7±0.30 cGy versus 225.9 cGy (3D bolus). The PTV minimum dose was 74.9±19.47 cGy versus 128.5 cGy (CB) and 83.2±12.92 cGy versus 139.9 cGy (3D bolus). The PTV mean dose was 226.2±0.83 cGy versus 225.4 cGy (CB) and 225.8±0.33 cGy versus 224.1 cGy (3D bolus).
The maximum dose to the spinal cord, a normal organ, was the same each time, averaging 135.6 cGy. Conclusion: The experimental results show that applying a 3D bolus to an irregular body surface is more dosimetrically useful than applying a commercial bolus, with excellent setup reproducibility and efficiency. If further case studies and research on the diversity of 3D printing materials are conducted, the application of 3D boluses in radiation therapy is expected to proceed more actively.

Evaluation of Dose Distributions Recalculated with Per-field Measurement Data under the Condition of Respiratory Motion during IMRT for Liver Cancer (간암 환자의 세기조절방사선치료 시 호흡에 의한 움직임 조건에서 측정된 조사면 별 선량결과를 기반으로 재계산한 체내 선량분포 평가)

  • Song, Ju-Young;Kim, Yong-Hyeob;Jeong, Jae-Uk;Yoon, Mee Sun;Ahn, Sung-Ja;Chung, Woong-Ki;Nam, Taek-Keun
    • Progress in Medical Physics
    • /
    • v.25 no.2
    • /
    • pp.79-88
    • /
    • 2014
  • The dose distributions within the real volumes of tumor targets and critical organs during internal target volume-based intensity-modulated radiation therapy (ITV-IMRT) for liver cancer were recalculated by applying the effects of actual respiratory organ motion, and the dosimetric features were analyzed through comparison with gating IMRT (Gate-IMRT) plan results. The ITV was created using MIM software, and a moving phantom was used to simulate respiratory motion. The doses were recalculated with the 3DVH (3-dimensional dose-volume histogram) program based on per-field data measured with a MapCHECK2 2-dimensional diode detector array. Although a sufficient prescription dose covered the PTV during ITV-IMRT delivery, the dose homogeneity in the PTV was inferior to that of the Gate-IMRT plan. We confirmed that the organs at risk (OARs) received higher doses with ITV-IMRT, as expected with an enlarged field, but the dose increase to the spinal cord was not significant, and the increased doses to the liver and kidney could be considered minor when reinforced constraints were applied during IMRT plan optimization. Because the Gate-IMRT method also has disadvantages, such as unexpected dosimetric variations when applying the gating system and an increased treatment time, it is better to first analyze the patient's respiratory condition and the importance and fulfillment of the IMRT plan dose constraints in order to select an optimal IMRT method for correcting the effect of respiratory organ motion.

Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives (사용자의 패스워드 인증 행위 분석 및 피싱 공격시 대응방안 - 사용자 경험 및 HCI의 관점에서)

  • Ryu, Hong Ryeol;Hong, Moses;Kwon, Taekyoung
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.79-90
    • /
    • 2014
  • User authentication based on an ID and password (PW) is widely used. As the Internet has become a growing part of people's lives, the number of times users enter IDs/PWs has increased across a variety of services. People have learned the authentication procedure so thoroughly that they enter their ID/PW unconsciously. This is referred to as the adaptive unconscious: a set of mental processes that take in information and produce judgements and behaviors without our conscious awareness, within a second. Most people sign up for various websites with a small number of IDs/PWs because they rely on memory to manage them. Human memory decays with the passage of time, and pieces of knowledge in memory tend to interfere with each other; for that reason, people may enter an invalid ID/PW. These characteristics of ID/PW authentication therefore lead to human vulnerabilities: people reuse a few PWs across websites, manage IDs/PWs from memory, and enter them unconsciously. Exploiting such human-factor vulnerabilities, information leakage attacks such as phishing and pharming have been increasing exponentially. In the past, information leakage attacks exploited vulnerabilities in hardware, operating systems, software, and so on; however, most current attacks tend to exploit the vulnerabilities of human factors. Attacks based on human-factor vulnerabilities are called social-engineering attacks, and malicious social-engineering techniques such as phishing and pharming are now among the biggest security problems. Phishing attempts to obtain valuable information such as IDs/PWs, while pharming steals personal data by redirecting a website's traffic to a fraudulent copy of a legitimate website.
The screens of the fraudulent copies used in both phishing and pharming attacks are almost identical to those of the legitimate websites, and in pharming even the URL can appear legitimate. Therefore, without the support of prevention and detection techniques such as anti-virus software ("vaccines") and reputation systems, it is difficult for users to determine intuitively whether a site is a phishing or pharming site or a legitimate one. Previous research on phishing and pharming attacks has mainly studied technical solutions. In this paper, we focus on human behaviour when users are confronted by phishing and pharming attacks without knowing it. We conducted an attack experiment to find out how many IDs/PWs are leaked by pharming and phishing attacks. We first configured the experimental settings under the same conditions as phishing and pharming attacks and built a phishing site for the experiment. We then recruited 64 voluntary participants and asked them to log in to our experimental site, and we conducted a questionnaire survey with each participant about the experiment. Through the attack experiment and survey, we observed whether participants' passwords were leaked when logging in to the experimental phishing site and how many different passwords were leaked out of each participant's total. We found that most participants logged in to the site unconsciously and that managing IDs/PWs from memory caused the leakage of multiple passwords. Users should actively utilize reputation systems, and online service providers should support prevention techniques that let users intuitively determine whether a site is a phishing site.

INFRARED THERMOGRAPHIC ANALYSIS OF TEMPERATURE RISE ON THE SURFACE OF BUCHANAN PLUGGER (적외선열화상장치를 이용한 Buchanan plugger 표면의 온도상승 분석)

  • Choi, Sung-A;Kim, Sun-Ho;Hwang, Yun-Chan;Youn, Chang;Oh, Byung-Ju;Choi, Bo-Young;Juhng, Woo-Nam;Jeong, Sun-Wa;Hwang, In-Nam;Oh, Won-Mann
    • Restorative Dentistry and Endodontics
    • /
    • v.27 no.4
    • /
    • pp.370-381
    • /
    • 2002
  • This study was performed to evaluate the temperature rise at various positions on the Buchanan plugger, the peak temperature for each plugger type, and the temperature change as a function of the touching time of the heat control spring. The heat carrier system "System B" (Model 1005, Analytic Technologies, USA) and Buchanan pluggers of sizes F, FM, M, and ML were used. The temperature on the digital display was set to 200°C, as recommended by Dr. Buchanan's "continuous wave of condensation" technique, and the power level was set to 10. To apply heat to the Buchanan pluggers, the heat control spring was touched for 1, 2, 3, 4, and 5 seconds, respectively. The temperature rise on the surface of the pluggers was measured at 0.5 mm intervals from the tip to 20 mm along the shank using infrared thermography (Radiation Thermometer-IR Temper, NEC San-ei Instruments, Ltd., Japan) and the TH31-702 Data Capture software program (NEC San-ei Instruments, Ltd., Japan). Data were analyzed using one-way ANOVA followed by Duncan's multiple range test and linear regression. The results were as follows. 1. The position at which temperature peaked was approximately 0.5 mm to 1.5 mm from the tip of the Buchanan pluggers (p<0.001), and the temperature decreased steadily from the tip toward the shank (p<0.001). 2. When the pluggers were heated over 5 seconds, the peak temperature by time of measurement ranged from 253.3±10.5°C to 192.1±3.3°C for a 1-s touch, from 218.6±5.0°C to 179.5±4.2°C for a 2-s touch, from 197.5±3.0°C to 167.5±3.7°C for a 3-s touch, from 183.7±2.5°C to 159.8±3.6°C for a 4-s touch, and from 164.9±2.0°C to 158.4±1.8°C for a 5-s touch. A 1-s touch showed the highest peak temperature, followed in descending order by 2-s, 3-s, and 4-s touches.
A 5-s touch showed the lowest peak temperature (p<0.001). 3. Each type of plugger showed a different peak temperature: the peak temperature was highest for the F type, followed in descending order by the M and ML types, and the FM type showed the lowest peak temperature (p<0.001). The results of this study indicate that the pluggers are designed to concentrate heat around the tip, that the actual temperature does not correlate well with the temperature recommended by Buchanan's "continuous wave of condensation" technique, and that a quick 1-s touch of the heat control spring produces the highest temperature rise.
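The statistical comparison used here, a one-way ANOVA across touch times, can be sketched as follows. The temperature samples are invented for illustration and are not the study's measurements; only the analysis structure matches.

```python
from scipy import stats

# Illustrative peak-temperature samples (°C) for three touch times;
# values are hypothetical, not the study's data.
touch_1s = [253.1, 251.8, 254.0, 252.5]
touch_3s = [197.2, 196.8, 198.0, 197.5]
touch_5s = [165.0, 164.5, 165.3, 164.8]

# One-way ANOVA: do the mean peak temperatures differ across groups?
f_stat, p_value = stats.f_oneway(touch_1s, touch_3s, touch_5s)
```

A significant result (here p < 0.001, as in the study) would then be followed by a post-hoc test such as Duncan's multiple range test to identify which touch times differ.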

A Study on Integrated Logistic Support (통합병참지원에 관한 연구)

  • 나명환;김종걸;이낙영;권영일;홍연웅;전영록
    • Proceedings of the Korean Reliability Society Conference
    • /
    • 2001.06a
    • /
    • pp.277-278
    • /
    • 2001
  • The successful operation of a product in service depends upon the effective provision of logistic support in order to achieve and maintain the required levels of performance and customer satisfaction. Logistic support encompasses the activities and facilities required to maintain a product (hardware and software) in service, covering maintenance, manpower and personnel, training, spares, technical documentation, packaging, handling, storage and transportation, and support facilities. The cost of logistic support is often a major contributor to the Life Cycle Cost (LCC) of a product, and customers increasingly make purchase decisions based on life cycle cost rather than initial purchase price alone. Logistic support considerations can therefore have a major impact on product sales by ensuring that the product can be easily maintained at a reasonable cost and that all the necessary facilities have been provided to fully support the product in the field so that it meets the required availability. Quantifying support costs allows the manufacturer to estimate the support cost elements and evaluate possible warranty costs; this reduces risk and allows support costs to be set at competitive rates. Integrated Logistic Support (ILS) is a management method by which all the logistic support services required by a customer can be brought together in a structured way and in harmony with a product. In essence, the application of ILS: causes logistic support considerations to be integrated into product design; develops logistic support arrangements that are consistently related to the design and to each other; and provides the necessary logistic support at the beginning of and during customer use, at optimum cost. The method by which ILS achieves much of the above is the application of Logistic Support Analysis (LSA).
This is a series of support analysis tasks performed throughout the design process to ensure that the product can be supported efficiently in accordance with the requirements of the customer. The successful application of ILS results in a number of customer and supplier benefits, which should include some or all of the following: greater product uptime; fewer product modifications due to supportability deficiencies, and hence less supplier rework; better adherence to production schedules in process plants through reduced maintenance and better support; lower supplier product costs; lower customer support costs; better visibility of support costs; reduced product LCC; a better and more saleable product; improved safety; increased overall customer satisfaction; increased product purchases; and the potential for purchase or upgrade of the product sooner through customer savings on support of the current product. ILS should be an integral part of the total management process, with an ongoing improvement activity that monitors achieved performance to tailor existing support and influence future design activities. For many years, ILS was predominantly applied to military procurement, primarily using standards generated by the US Department of Defense (DoD). The military standards refer to specialized government infrastructures and are too complex for commercial application. The methods and benefits of ILS, however, have potential for much wider application in commercial and civilian use. The concept of ILS is simple and depends on a structured procedure that ensures that logistic aspects are fully considered throughout the design and development phases of a product, in close cooperation with the designers.
The ability to support the product effectively is given equal weight to performance and is fully considered in relation to its cost. The application of ILS provides improvements in availability, maintenance support, and long-term logistic cost savings; logistic costs are significant throughout the life of a system and can often amount to many times its initial purchase cost. This study provides guidance on the minimum activities necessary to implement effective ILS for a wide range of commercial suppliers. The guide supplements IEC 60106-4, Guide on maintainability of equipment, Part 4: Section Eight, maintenance and maintenance support planning, which emphasizes the maintenance aspects of the support requirements and refers to other existing standards where appropriate. The use of Reliability and Maintainability (R&M) studies is also mentioned in this study, as R&M is an important interface area to ILS.
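The life-cycle-cost reasoning above, where support costs over a product's service life can dwarf the initial purchase price, can be illustrated with a minimal sketch. All figures and the discounting choice are hypothetical, not from the study.

```python
def life_cycle_cost(purchase_price, annual_support_cost, service_years,
                    discount_rate=0.0):
    """Sum acquisition cost and (optionally discounted) support costs
    over the product's service life. All inputs are illustrative."""
    support = sum(
        annual_support_cost / (1 + discount_rate) ** year
        for year in range(1, service_years + 1)
    )
    return purchase_price + support
```

For example, a product bought for 100,000 with 15,000 per year of support over 10 years has an undiscounted LCC of 250,000, i.e. support accounts for well over half the total, which is why ILS pushes supportability into the design phase.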

Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young;Lee, Jaewook;Bae, Junhee;Kim, Jung-Min
    • Economic and Environmental Geology
    • /
    • v.53 no.5
    • /
    • pp.609-617
    • /
    • 2020
  • After introducing its Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and 'Green New Deal' as the 'Korean New Deal' in 2020. We analyzed Korea Institute of Geoscience and Mineral Resources (KIGAM) research projects related to this policy and conducted a market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggest digital geo-contents with Augmented Reality (AR) and Virtual Reality (VR) and a public geo-data collection and sharing system. It is necessary to expand and support smart mining and digital oil field research for the policy of '5th generation mobile communication (5G) and artificial intelligence (AI) convergence into all industries'. The Korean government is proposing downtown 3D maps for its 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposes developing technologies for green industries, including resource circulation, Carbon Capture, Utilization and Storage (CCUS), and electric and hydrogen vehicles. KIGAM has carried out related research projects and currently conducts research on domestic energy storage minerals. Oil and gas are presented as representative application industries of the digital twin; much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging research subjects are deeply related to data analysis, simulation, AI, and the IoT, so KIGAM should collaborate with sensor and computing software and systems companies.

Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.127-138
    • /
    • 2011
  • The core service of most research portal sites is providing research papers that match the research interests of various researchers. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as its title, authors, and keywords. Unfortunately, however, most users of this service are not acquainted with such concrete bibliographic information, which implies that most users inevitably go through repeated trial-and-error attempts at keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know the appropriate keywords. In this case, a user must search iteratively as follows: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by its keyword management and searching mechanism. To overcome this kind of inefficiency, some leading research portal sites have adopted association rule mining-based keyword recommendation, similar to the product recommendation of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it can show only a simple and direct relationship between two keywords. In other words, association analysis by itself is unable to present the complex relationships among the many keywords of adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among the keywords used in research papers.
The keyword association network is established in the following phases: i) the set of keywords specified in a paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords to extract frequent keyword patterns that satisfy predefined thresholds of confidence, support, and lift, and iii) the frequent keyword patterns are schematized as a network that shows the core keywords of each research area and the connecting keywords among two or more research areas. To estimate the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for the association analysis and the R software for the social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network; the results are summarized in Section 4 of this paper. The main contributions of our proposed approach are the following: i) the keyword network can provide an initial roadmap of a research area to researchers who are new to the domain, ii) a researcher can grasp the distribution of the many keywords neighboring a certain keyword, and iii) researchers can get ideas for converging different research areas by observing the connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary; for practical use, homonym, synonym, and multilingual problems should be resolved with a standard meta-dictionary. Additionally, clearer guidelines for clustering research areas and for defining core and connecting keywords should be provided. Finally, intensive experiments should be performed not only on Korean research papers but also on international papers.
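The three phases above can be sketched in a few lines of Python. This is only an illustrative sketch: the toy corpus, the threshold values, and the rule direction (alphabetically first keyword as antecedent) are assumptions for demonstration, not values from the study, which used SAS Enterprise Miner and R on 131 real papers.

```python
from collections import Counter
from itertools import combinations

# Phase i) Hypothetical toy corpus: each paper's keyword set is treated
# as one "basket" of co-purchased items.
papers = [
    {"data mining", "association rule", "recommendation"},
    {"data mining", "social network", "recommendation"},
    {"social network", "centrality", "keyword network"},
    {"data mining", "association rule", "keyword network"},
]

n = len(papers)
item_count = Counter(k for p in papers for k in p)
pair_count = Counter(frozenset(c) for p in papers for c in combinations(p, 2))

# Phase ii) Keep keyword pairs whose support, confidence, and lift exceed
# predefined thresholds (illustrative values; the paper's cutoffs are not given).
MIN_SUPPORT, MIN_CONF, MIN_LIFT = 0.25, 0.5, 1.0

edges = []
for pair, cnt in pair_count.items():
    a, b = sorted(pair)                  # rule direction a -> b (an assumption)
    support = cnt / n                    # fraction of papers containing both
    conf = cnt / item_count[a]           # confidence of rule a -> b
    lift = conf / (item_count[b] / n)    # lift of rule a -> b
    if support >= MIN_SUPPORT and conf >= MIN_CONF and lift >= MIN_LIFT:
        edges.append((a, b, support, conf, lift))

# Phase iii) Schematize the surviving pairs as a network; keywords with high
# degree are candidate core keywords of a research area (connecting keywords,
# which bridge clusters, would require a clustering step and are omitted).
degree = Counter()
for a, b, *_ in edges:
    degree[a] += 1
    degree[b] += 1
```

In practice the network and dendrogram would be drawn with a graph library rather than inspected as a `Counter`, but the same support/confidence/lift filtering determines which keyword pairs become edges.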