• Title/Summary/Keyword: Weighting Technique (가중치부여 기법)

Pattern Analysis of Traffic Accident data and Prediction of Victim Injury Severity Using Hybrid Model (교통사고 데이터의 패턴 분석과 Hybrid Model을 이용한 피해자 상해 심각도 예측)

  • Ju, Yeong Ji;Hong, Taek Eun;Shin, Ju Hyun
    • Smart Media Journal / v.5 no.4 / pp.75-82 / 2016
  • Although Korea's economy and domestic automobile market have grown with changes in the road environment, the traffic accident rate has also increased, and casualties remain at a serious level. For this reason, the government is establishing and promoting policies to open traffic accident data and address the problem. This paper describes a method for predicting traffic accident injury severity by eliminating class imbalance in the traffic accident data and constructing a hybrid model. Using both the original traffic accident data and sampled data as training data, the FP-Growth algorithm learns patterns associated with injury severity. By analyzing the association patterns of the two training sets, the common related patterns can be extracted; a decision tree and a multinomial logistic regression analysis are then performed, and a hybrid model is constructed by assigning weights to the related attributes.
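
As a rough illustration of the weighting step, the sketch below mines frequent itemsets with FP-Growth and up-weights attributes that co-occur with high severity. It assumes mlxtend's FP-Growth implementation and toy one-hot accident attributes; it is not the authors' actual data or pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): mine frequent patterns
# with FP-Growth on one-hot accident records, then up-weight attributes
# that co-occur with high injury severity. Column names are hypothetical.
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth

records = pd.DataFrame({            # toy one-hot encoded accident data
    "night": [1, 1, 0, 1, 0],
    "wet_road": [1, 0, 0, 1, 1],
    "severe": [1, 1, 0, 1, 0],      # injury-severity label as an item
}).astype(bool)

patterns = fpgrowth(records, min_support=0.4, use_colnames=True)

# Weight each attribute by the support of severity-related itemsets it joins.
weights = {}
for _, row in patterns.iterrows():
    items = row["itemsets"]
    if "severe" in items and len(items) > 1:
        for item in items - {"severe"}:
            weights[item] = weights.get(item, 0.0) + row["support"]
print(weights)   # e.g. {'night': ..., 'wet_road': ...}
```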

Academic Conference Categorization According to Subjects Using Topical Information Extraction from Conference Websites (학회 웹사이트의 토픽 정보추출을 이용한 주제에 따른 학회 자동분류 기법)

  • Lee, Sue Kyoung;Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.22 no.2 / pp.61-77 / 2017
  • As the amount of academic conference information on the Internet has grown rapidly, automatic classification of conferences by research subject enables researchers to find related conferences efficiently. The information provided by most conference listing services is limited to title, date, location, and website URL; among these features, the only one containing topical words is the title, which causes an information insufficiency problem. We therefore propose methods that resolve this problem by utilizing web contents. Specifically, the proposed methods extract the main contents from the HTML document collected via a conference's website URL. Based on the similarity between the title of a conference and its main contents, topical keywords are selected to reinforce the important keywords among the main contents. Experiments on a real-world dataset showed that using the additional information extracted from conference websites successfully improves conference classification performance. We plan to further improve classification accuracy by considering the structure of websites.
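
A minimal sketch of the title-to-content keyword selection idea, using scikit-learn TF-IDF vectors; the scoring rule and the example strings are assumptions, not the paper's exact method.

```python
# Minimal sketch (assumed details, not the paper's exact method): pick
# topical keywords from a conference page by scoring page terms against
# the conference title with TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer

title = "International Conference on Data Mining and Knowledge Discovery"
page_text = ("The conference invites papers on data mining, machine "
             "learning, knowledge discovery, databases, and big data.")

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform([title, page_text])
vocab = vec.get_feature_names_out()

# Score each page term by title-weight * page-weight: terms shared with
# the title (and prominent on the page) rank highest.
title_w, page_w = tfidf.toarray()
scores = {t: title_w[i] * page_w[i] for i, t in enumerate(vocab) if page_w[i] > 0}
keywords = sorted(scores, key=scores.get, reverse=True)[:5]
print(keywords)
```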

A Study on Suitability Mapping for Artificial Reef Facility using Satellite Remotely Sensed Imagery and GIS (위성원격탐사자료와 GIS를 이용한 인공어초 시설지 적지 선정 공간분포도 작성 연구)

  • 조명희;김병석;서영상
    • Korean Journal of Remote Sensing / v.17 no.1 / pp.99-109 / 2001
  • In order to establish effective fishing ground facilities and artificial reefs in coastal areas, a methodology that selects the most suitable area for artificial reefs should be applied after analyzing the correlation between the fishing ground environment and the ocean environment. In this paper, thematic maps were prepared using satellite remote sensing and GIS for sea surface temperature, chlorophyll, transparency, water depth, and submarine geology, which are considered the most important factors when selecting a suitable area for artificial reefs in Tong-Yong Bay. The most suitable area was then selected by assigning weight scores according to the suitability conditions of the area and analyzing the spatial data. The results showed that this methodology, which selects suitable areas for artificial reefs using satellite remote sensing and GIS, makes it possible to manage artificial reef installation more completely and efficiently through spatial analysis and visualization.
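
The weighted overlay at the heart of this kind of suitability mapping can be sketched as follows; the layer names, weight values, and toy rasters are hypothetical.

```python
# Minimal sketch (hypothetical layer names and weights): a GIS-style
# weighted overlay that combines normalized suitability rasters into a
# single suitability score per cell.
import numpy as np

# Each raster holds suitability scores in [0, 1] for a 3x3 toy grid.
layers = {
    "sea_surface_temp": np.array([[0.9, 0.8, 0.4], [0.7, 0.6, 0.3], [0.5, 0.4, 0.2]]),
    "chlorophyll":      np.array([[0.6, 0.7, 0.5], [0.8, 0.9, 0.4], [0.3, 0.5, 0.6]]),
    "depth":            np.array([[0.8, 0.6, 0.2], [0.9, 0.7, 0.3], [0.4, 0.3, 0.1]]),
}
weights = {"sea_surface_temp": 0.4, "chlorophyll": 0.3, "depth": 0.3}

suitability = sum(weights[name] * raster for name, raster in layers.items())
best = np.unravel_index(np.argmax(suitability), suitability.shape)
print(f"best cell: {best}, score: {suitability[best]:.2f}")
```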

Analyzing Correlations between Movie Characters Based on Deep Learning

  • Jin, Kyo Jun;Kim, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.26 no.10 / pp.9-17 / 2021
  • Humans are social animals that gain information and social interaction through dialogue. In conversation, the mood of a word can change depending on one person's feelings toward another. Relationships between characters in films are essential for understanding their stories and lines, yet methods for extracting this information from films have not been investigated. A model that automatically analyzes relationship aspects in a movie is therefore needed. In this paper, we propose a method that analyzes the relationships between characters in a movie by using deep learning techniques to measure the emotion of each character pair. The proposed method first extracts the main characters from the movie script and finds the dialogue between them. To analyze the relationships between the main characters, it then performs sentiment analysis on each line, weights the scores according to the position of each line within the overall time span, and aggregates them. Experimental results on real datasets demonstrate that the proposed scheme effectively measures the emotional relationships between the main characters.
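
A minimal sketch of the position-weighted aggregation; the toy lexicon scorer stands in for the paper's deep sentiment model, and the linear weighting is an assumption.

```python
# Minimal sketch (assumed weighting form; the paper's deep model is
# replaced here by a toy sentiment scorer): aggregate per-line sentiment
# for a character pair, weighting later lines more heavily.
POSITIVE = {"love", "thank", "great"}
NEGATIVE = {"hate", "never", "liar"}

def toy_sentiment(line: str) -> float:
    """Crude stand-in for a deep sentiment model: +1/-1 word counts."""
    words = line.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def pair_relationship(lines: list[str]) -> float:
    """Position-weighted mean sentiment over a pair's dialogue."""
    n = len(lines)
    weights = [(i + 1) / n for i in range(n)]   # later lines weigh more
    scores = [toy_sentiment(l) for l in lines]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

dialogue = ["I never want to see you again", "You liar", "I love you, thank you"]
print(f"relationship score: {pair_relationship(dialogue):.2f}")
```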

Comparison of Forest Growing Stock Estimates by Distance-Weighting and Stratification in k-Nearest Neighbor Technique (거리 가중치와 층화를 이용한 최근린기반 임목축적 추정치의 정확도 비교)

  • Yim, Jong Su;Yoo, Byung Oh;Shin, Man Yong
    • Journal of Korean Society of Forest Science / v.101 no.3 / pp.374-380 / 2012
  • The k-Nearest Neighbor (kNN) technique is widely applied to assess forest resources at the county level and to provide spatial information by combining large-area forest inventory data with remote sensing data. In this study, two approaches, distance weighting and stratification of the training dataset, were compared to improve kNN-based estimates of forest growing stock. Across five distance-weight exponents (0 to 2 in steps of 0.5), the accuracy of the kNN-based estimates was very similar, with mean deviations within ±0.6 m³/ha. The training dataset was stratified by horizontal reference area (HRA) and by forest cover type, applied separately and in combination. Although the accuracy of the estimates obtained by combining forest cover type with an HRA of 100 km was slightly improved, stratification by forest cover type alone was more efficient given a sufficient number of training data. The mean forest growing stock estimated by kNN with HRA-100 and stratification by forest cover type at k=7 was somewhat underestimated (by 5 m³/ha) compared to the 2011 statistical yearbook of forestry.
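
The distance-weighting comparison can be sketched as below, with weights 1/d^t for exponents t from 0 to 2 (t = 0 reduces to the unweighted mean); the data are random stand-ins for inventory plots and pixel spectra.

```python
# Minimal sketch (hypothetical data): distance-weighted kNN estimation of
# growing stock, with weight 1/d**t as the exponent t varies, in the
# spirit of the comparison described above.
import numpy as np

def knn_estimate(query, X, y, k=7, t=1.0, eps=1e-9):
    """Predict a value for `query` from the k nearest training pixels."""
    d = np.linalg.norm(X - query, axis=1)          # spectral distances
    idx = np.argsort(d)[:k]                        # k nearest neighbors
    w = 1.0 / (d[idx] + eps) ** t                  # distance weights
    return np.sum(w * y[idx]) / np.sum(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                       # training spectra
y = rng.uniform(50, 250, size=50)                  # growing stock (m^3/ha)
q = rng.normal(size=4)

for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"t={t}: {knn_estimate(q, X, y, t=t):.1f} m^3/ha")
```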

A Content-based Video Rate-control Algorithm Interfaced to Human-eye (인간과 결합한 내용기반 동영상 율제어)

  • 황재정;진경식;황치규
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.3C / pp.307-314 / 2003
  • In a typical multiple-video-object coder, objects of greater interest, such as a speaker or a moving object, are consistently coded with higher priority. Since the priority of each object may not be fixed over the whole sequence and can vary from frame to frame, it must be adjusted within each frame. In this paper, we analyze the independent and global rate control algorithms in which the QP value is controlled by static parameters: object importance or priority, target PSNR, and weighted distortion. The priority among the static parameters is analyzed and adjusted into dynamic parameters according to the visual interest or importance obtained through a camera interface. Target PSNR and weighted distortion are derived proportionally from magnitude, motion, and distortion. We apply these parameters to weighted-distortion control and priority-based control, resulting in efficient bit-rate distribution. As a result, fewer bits are allocated to video objects of lower importance and more bits to those of higher visual importance. The stabilization period of visual quality is reduced to fewer than 15 frames of the coded sequence. In terms of PSNR, the proposed scheme shows a quality gain of more than 2 dB over conventional schemes. The coding scheme interfaced to the human eye thus proves to be an efficient video coder for handling multiple video objects.
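
As a rough sketch of priority-driven bit allocation (the numbers and the proportional rule are illustrative, not the paper's exact control law):

```python
# Minimal sketch (hypothetical numbers): distributing a frame's bit budget
# across video objects in proportion to dynamic priority weights, the core
# idea behind the weighted-distortion rate control described above.
def allocate_bits(budget_bits: int, priorities: dict[str, float]) -> dict[str, int]:
    """Split the frame bit budget proportionally to object priority."""
    total = sum(priorities.values())
    return {obj: round(budget_bits * p / total) for obj, p in priorities.items()}

# Priorities might come from camera-interface visual interest per frame.
frame_budget = 48_000
priorities = {"speaker": 0.6, "moving_object": 0.3, "background": 0.1}
print(allocate_bits(frame_budget, priorities))
# {'speaker': 28800, 'moving_object': 14400, 'background': 4800}
```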

Study on Extraction of Keywords Using TF-IDF and Text Structure of Novels (TF-IDF와 소설 텍스트의 구조를 이용한 주제어 추출 연구)

  • You, Eun-Soon;Choi, Gun-Hee;Kim, Seung-Hoon
    • Journal of the Korea Society of Computer and Information / v.20 no.2 / pp.121-129 / 2015
  • With the explosive growth of information about books, a growing number of customers find it difficult to pick a book. Against this backdrop, book recommendation systems that offer appropriate information and ultimately encourage customers to buy become more important. However, existing recommendation systems based on bibliographical information or user data have reliability issues in their recommendation results, which is why semantic information extracted from the text of a book's main body should be reflected in a recommendation system. Accordingly, as a preliminary study, this paper suggests a method for extracting keywords from the main body of novels using the TF-IDF method together with the text structure. To this end, the texts of 100 novels were collected and divided into four structural elements: preface, dialogue, non-dialogue, and closing. The TF-IDF weight of each keyword was then calculated. The results show that keyword extraction accuracy improves by 42.1% when more weight is given to dialogue, while including the preface and closing, instead of using just the main body.
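
A minimal sketch of structure-weighted TF-IDF; the section weights and corpus statistics below are made up for illustration.

```python
# Minimal sketch (assumed weighting scheme): structure-weighted term
# frequency, boosting terms that occur in dialogue before computing
# TF-IDF-style scores. Section weights here are illustrative.
import math
from collections import Counter

SECTION_WEIGHTS = {"preface": 1.0, "dialogue": 2.0, "non_dialogue": 1.0, "closing": 1.0}

def weighted_tf(sections: dict[str, str]) -> Counter:
    """Term frequency where each occurrence counts by its section weight."""
    tf = Counter()
    for name, text in sections.items():
        for word in text.lower().split():
            tf[word] += SECTION_WEIGHTS[name]
    return tf

novel = {
    "preface": "a story of the sea",
    "dialogue": "the whale the whale captain",
    "non_dialogue": "he stared at the sea",
    "closing": "the sea remained",
}
doc_freq = {"whale": 3, "sea": 40, "captain": 10, "the": 100}  # toy corpus stats
n_docs = 100

tf = weighted_tf(novel)
scores = {w: tf[w] * math.log(n_docs / doc_freq[w]) for w in doc_freq}
print(sorted(scores.items(), key=lambda x: -x[1])[:3])
```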

Prioritization of Intermodal Transportation Facilities with Considering the Budget Rate Constraints of Focal Terminal Types (교통물류거점유형별 예산비율을 고려한 연계교통시설 투자우선순위 분석)

  • Oh, Seichang;Lee, Jungwoo;Lee, Kyujin;Choi, Keechoo
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.4D / pp.361-368 / 2010
  • Congested sections of national backbone networks have generally been improved under the national network expansion plan. In the case of intermodal terminals, which are the origins of logistics, however, congestion is still severe enough that travel times between origins and destinations remain long. An intermodal transportation systems plan was therefore established for the connector networks between major intermodal terminals and the national backbone network, or between terminals. Given the limitations of applying the existing prioritization methodology to intermodal connector facilities, this study suggests an improved one. The study incorporates terminal characteristics into the hierarchical structure and assessment list, while avoiding concentration on any specific terminal type identified through the survey; to prevent such concentration, a budget constraint for each terminal type was considered ahead of prioritization. Finally, the prioritization methodology was developed as a two-step assessment under the consideration that a specific terminal may not be involved in an intermodal connector facility project. Calculating the weights from the survey shows that effects of project implementation, such as changes in the d/c ratio and the accessibility index, receive the highest weight, followed by the degree of regional underdevelopment. Although the methodology could not yield priorities for every assessment item, it will be useful for setting policy directions for intermodal connector facility projects.
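
The budget-rate-constrained prioritization can be sketched as a greedy selection; the weights, budget shares, and projects below are hypothetical.

```python
# Minimal sketch (hypothetical weights and projects): prioritize connector
# projects by weighted score while capping each terminal type at its
# budget-rate share, as the two-step assessment above suggests.
WEIGHTS = {"dc_change": 0.5, "accessibility": 0.3, "underdevelopment": 0.2}
BUDGET_RATE = {"port": 0.5, "rail": 0.3, "airport": 0.2}   # share per type
TOTAL_BUDGET = 100.0

projects = [  # (name, terminal type, cost, criterion scores)
    ("P1", "port", 40, {"dc_change": 0.9, "accessibility": 0.7, "underdevelopment": 0.2}),
    ("P2", "rail", 25, {"dc_change": 0.6, "accessibility": 0.8, "underdevelopment": 0.5}),
    ("P3", "port", 30, {"dc_change": 0.4, "accessibility": 0.5, "underdevelopment": 0.9}),
    ("P4", "airport", 20, {"dc_change": 0.7, "accessibility": 0.6, "underdevelopment": 0.3}),
]

score = lambda s: sum(WEIGHTS[c] * v for c, v in s.items())
remaining = {t: r * TOTAL_BUDGET for t, r in BUDGET_RATE.items()}
selected = []
for name, ttype, cost, s in sorted(projects, key=lambda p: -score(p[3])):
    if cost <= remaining[ttype]:           # respect the type's budget cap
        selected.append(name)
        remaining[ttype] -= cost
print(selected)   # e.g. ['P1', 'P2', 'P4']
```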

Improvement of Personalized Diagnosis Method for U-Health (U-health 개인 맞춤형 질병예측 기법의 개선)

  • Min, Byoung-Won;Oh, Yong-Sun
    • The Journal of the Korea Contents Association / v.10 no.10 / pp.54-67 / 2010
  • Applying the conventional machine-learning methods frequently used in the health-care area to modern U-health service analysis poses several fundamental problems. First, because the study of U-health has a short history, there are still few examples of applying traditional methods to the modern U-health environment. Second, machine-learning methods are difficult to apply to a U-health service environment that requires real-time disease management, because they spend a great deal of time on learning. Third, although various machine-learning schemes have been proposed, none provides a way to assign weights to disease-related variables, so a personalized U-health diagnosis system cannot be implemented with conventional methods. In this paper, a novel diagnosis scheme, PCADP, is proposed to overcome these problems. PCADP is a personalized diagnosis method that makes bio-data analysis just one 'process' within the U-health service system. In addition, we offer a semantic model of the U-health ontology framework based on PCADP, in order to describe U-health data and service specifications as meaningful representations. PCADP is a statistical diagnosis method characterized by a flexible structure, real-time processing, continuous improvement, and easy monitoring of the decision process. To the best of the authors' knowledge, the PCADP scheme and ontology framework proposed in this paper exhibit these characteristics to among the best degrees of recently developed U-health schemes.
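
Since the abstract does not detail PCADP's internals, the following is only a generic illustration of weighting disease-related variables per patient; every name and number in it is hypothetical.

```python
# Minimal sketch (purely illustrative; the paper does not specify PCADP's
# internals here): a personalized weighted risk score over disease-related
# variables, where per-patient weights replace one-size-fits-all models.
def risk_score(bio: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized bio-signal deviations."""
    total = sum(weights.values())
    return sum(weights[v] * bio[v] for v in weights) / total

# Hypothetical normalized readings (0 = normal, 1 = highly abnormal).
patient_bio = {"glucose": 0.8, "blood_pressure": 0.4, "heart_rate": 0.2}

# Personalized weights, e.g. a higher glucose weight for a diabetic patient.
personal_weights = {"glucose": 0.6, "blood_pressure": 0.25, "heart_rate": 0.15}
print(f"risk: {risk_score(patient_bio, personal_weights):.2f}")
```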

Effective Prioritized HRW Mapping in Heterogeneous Web Server Cluster (이질적 웹 서버 클러스터 환경에서 효율적인 우선순위 가중치 맵핑)

  • 김진영;김성천
    • Journal of KIISE: Computer Systems and Theory / v.30 no.12 / pp.708-713 / 2003
  • For many years, a clustered heterogeneous web server architecture has formed on the Internet because of the explosive growth of Internet services and the varying quality of requests. The critical point in a cluster environment is the scheme for mapping requests to servers, which has recently become a main issue in Internet architecture. Previous mapping methods aim to assign equal loads to the servers in a cluster based on the number of requests. However, the recent growth of diverse services makes it hard to rely on simple load balancing to achieve acceptable latency. Mapping based on the requested content, so-called "content-based" mapping, which decreases response times and increases cache hit rates across the servers, has therefore been highly valued recently. This paper proposes Prioritized Highest Random Weight mapping (PHRW mapping), which improves content-based mapping to properly fit the heterogeneous environment. This mapping scheme, which assigns requests to servers with priority, is very effective in a heterogeneous web server cluster, and especially effective in decreasing the latency of reactive data services that have latency limits. Through algorithmic analysis and simulation, this paper shows that the proposed PHRW mapping yields higher performance by decreasing latency.
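
Standard weighted HRW (rendezvous) hashing, which PHRW builds on, can be sketched as follows; the priority extension itself is not reproduced, and the server weights are hypothetical.

```python
# Minimal sketch (standard weighted HRW / rendezvous hashing; the paper's
# priority extension is not reproduced here): each request URL is mapped
# to the server with the highest weighted hash score, so unequal server
# capacities receive proportionally more requests.
import hashlib
import math

SERVERS = {"s1": 1.0, "s2": 2.0, "s3": 4.0}   # hypothetical capacity weights

def hrw_map(url: str, servers: dict[str, float]) -> str:
    """Pick the server with the highest weighted random score for this URL."""
    def score(server: str, weight: float) -> float:
        h = int(hashlib.md5(f"{server}:{url}".encode()).hexdigest(), 16)
        x = (h % 2**53 + 1) / (2**53 + 2)      # uniform in (0, 1)
        return -weight / math.log(x)           # weighted rendezvous score
    return max(servers, key=lambda s: score(s, servers[s]))

counts = {s: 0 for s in SERVERS}
for i in range(10_000):
    counts[hrw_map(f"/page/{i}", SERVERS)] += 1
print(counts)   # roughly proportional to the 1:2:4 weights
```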