• Title/Summary/Keyword: 이웃해 탐색 (neighbor solution search)

A Study on the Analysis of Effect Factors of Illegal Parking Using Data Mining Techniques (데이터마이닝 기법을 활용한 불법주차 영향요인 분석)

  • Lee, Chang-Hee;Kim, Myung-Soo;Seo, So-Min
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.13 no.4 / pp.63-72 / 2014
  • With the rapid development of the economy and other fields, the standard of living in South Korea has improved and, consequently, the demand for automobiles has increased quickly. This has led to various traffic issues such as traffic congestion, traffic accidents, and parking problems. In particular, illegal parking caused by the increase in the number of automobiles is considered one of the main causes of traffic congestion, intensifies disputes between neighbors over parking space, and has come to the fore as a social issue. This study therefore looked into Daejeon Metropolitan City, which is understood to have the highest automobile sharing rate in South Korea but relatively few illegal parking crackdowns. To investigate the problem of illegal parking, the study conducted a decision-tree-based Exhaustive CHAID analysis to identify not only what makes drivers park illegally but also which factors tempt them into doing so, and then proposes solutions to the problem. According to the analysis, the factors that influence drivers' decisions to park illegally are, in order, distance, the driver's experience of being caught, occupation, and use time. From the prediction model, four nodes were finally extracted. Given this result, solutions to illegal parking include establishing additional public parking lots, first securing parking space for vehicles used for living and working, and activating campaigns to strengthen illegal parking crackdowns and encourage civic consciousness.
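
A minimal sketch of this kind of decision-tree analysis is shown below. scikit-learn ships CART rather than Exhaustive CHAID, so a CART classifier stands in for the paper's method, and the survey file and column names (distance, caught_before, occupation, use_time) are hypothetical illustrations rather than the study's actual variables.

```python
# Hypothetical decision-tree analysis of illegal-parking factors.
# CART (scikit-learn) stands in for Exhaustive CHAID, which scikit-learn lacks.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("parking_survey.csv")  # hypothetical survey data
X = pd.get_dummies(df[["distance", "caught_before", "occupation", "use_time"]])
y = df["parked_illegally"]              # 0/1 outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=list(X.columns)))  # inspect the split rules
print("held-out accuracy:", tree.score(X_test, y_test))
```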

Does the Daily Contact with Older People Alleviate the Implicit and Explicit Ageist Attitude of Children? (노인과의 일상적 접촉이 노인에 대한 어린이의 명시적·암묵적 태도에 미치는 영향)

  • Seok, Minae;Han, Gyoung-hae
    • 한국노년학 / v.38 no.3 / pp.409-433 / 2018
  • The purpose of this study is to examine the effect of daily contact with the elderly on children's ageist attitudes. Acknowledging people's tendency to respond in socially appropriate ways to explicit attitude measurements, an implicit measurement is introduced, and its relation to daily contact with the elderly (DCE) is analyzed. The research questions are as follows: 1) Are these two attitudes explained by different factors? 2) Can DCE alleviate both children's implicit and explicit ageist attitudes? 3) How does contact with grandparents and neighboring elderly affect children's explicit and implicit ageist attitudes? Data were collected from 503 fourth- to sixth-grade elementary school children. The Child-Age Implicit Association Test was used to measure implicit ageist attitudes, and multinomial logistic analysis and ordered logistic analysis were applied. The main results are as follows. First, explicit and implicit ageist attitudes were found to be related to different predictors. Second, elderly contact seems to lessen children's ageist attitudes overall. Third, the effects of grandparental contact and neighboring-elderly contact on the two forms of ageism differed: while the effect of elderly-neighbor contact is limited to the expression of ageism, grandparental contact influences not only explicit but also implicit ageism, although the effect on the implicit attitude is limited in extent. Fourth, the quantity of contact, not its quality, was related to implicit ageist attitudes, which contradicts the conventional idea of Intergroup Contact Theory. In further research, the predictors of implicit ageist attitudes need to be thoroughly examined.
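
For readers who want a concrete starting point, the sketch below shows the two regression models named in the abstract using statsmodels. The data file and column names (contact_freq, contact_quality, grade, attitude_category, ageism_level) are hypothetical placeholders, not the study's variables.

```python
# Hedged sketch: multinomial and ordered logistic regression with statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("children_ageism.csv")  # hypothetical data file
predictors = df[["contact_freq", "contact_quality", "grade"]]

# Multinomial logit for an unordered categorical attitude outcome.
mn_fit = sm.MNLogit(df["attitude_category"], sm.add_constant(predictors)).fit(disp=False)
print(mn_fit.summary())

# Ordered logit for an ordinal ageism score (e.g., low/medium/high);
# OrderedModel estimates the thresholds itself, so no constant is added.
ord_fit = OrderedModel(df["ageism_level"], predictors, distr="logit").fit(
    method="bfgs", disp=False)
print(ord_fit.summary())
```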

Social Worker's Perceptions and Working Experiences of Older Adults Who Live Alone in Severe Social Isolation Based on the Case of 「Making Friends of Older Adults who Live Alone」 (「독거노인 친구만들기」를 통해 살펴본 '숨겨진 이웃', 사회적 고립이 심각한 노인 1인 가구에 대한 사회복지사의 인식과 경험에 관한 연구)

  • Kim, Yujin
    • 한국노년학 / v.38 no.4 / pp.1149-1171 / 2018
  • The purpose of this study is to increase understanding of social intervention for older adults who live alone in such severe social isolation that they are effectively 'hidden'. Through qualitative descriptive methods, it describes how social workers in the "Making Friends of Older Adults who Live Alone" project have perceived older adults living alone in seriously isolated situations, whether their perceptions of the elderly changed as the project progressed, and what kinds of experiences these social workers had while providing case management to older adults. In-depth interviews with 40 social workers, case management records of 70 senior citizens, and research journals were collected and analyzed using qualitative content analysis methods. The results of the analysis were presented in two categories, each with four subcategories. Based on the research findings, four kinds of implications were suggested.

Lesion Detection in Chest X-ray Images based on Coreset of Patch Feature (패치 특징 코어세트 기반의 흉부 X-Ray 영상에서의 병변 유무 감지)

  • Kim, Hyun-bin;Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.23 no.3 / pp.35-45 / 2022
  • Even in recent years, treatment of emergency patients is still often delayed due to a shortage of medical resources in marginalized areas. Research on automating the analysis of medical data is ongoing, aiming to address the inaccessibility of medical services and the shortage of medical personnel. However, computer vision-based automation of medical inspection requires high costs for collecting and labeling training data. These problems stand out when classifying lesions that are rare, or pathological features and pathogenesis that are difficult to define clearly by visual inspection. Anomaly detection is attracting attention as a method that can significantly reduce data collection costs by adopting an unsupervised learning strategy. In this paper, building on existing anomaly detection techniques, we propose a method for detecting abnormal chest X-ray images as follows: (1) normalize the brightness range of medical images resampled to an optimal resolution; (2) select feature vectors with high representative power from the set of intermediate-level patch features extracted from lesion-free images; (3) measure each test patch's difference from the selected lesion-free feature vectors using a nearest neighbor search algorithm. The proposed system can simultaneously perform anomaly classification and localization for each image. In this paper, the anomaly detection performance of the proposed system on chest X-ray images in PA projection is measured and reported under detailed conditions. We demonstrate the effectiveness of anomaly detection for medical images with a classification AUROC of 0.705 on a random subset extracted from the PadChest dataset. The proposed system can be used to improve the clinical diagnosis workflow of medical institutions and can effectively support early diagnosis in medically underserved areas.
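
The three steps above amount to a coreset memory bank plus nearest-neighbor scoring, in the spirit of PatchCore-style methods. The sketch below is a minimal illustration under that reading; the feature file, coreset size, and greedy k-center selection are assumptions, and backbone feature extraction is elided.

```python
# Hedged sketch: coreset of lesion-free patch features + nearest-neighbor scoring.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def greedy_coreset(features: np.ndarray, n_select: int) -> np.ndarray:
    """Greedy k-center selection: keep the patch features that best cover the set."""
    selected = [0]
    dist = np.linalg.norm(features - features[0], axis=1)
    for _ in range(n_select - 1):
        idx = int(dist.argmax())  # farthest point from the current coreset
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(features - features[idx], axis=1))
    return features[selected]

# (N, D) intermediate-level patch features from lesion-free images, extracted
# by a pretrained backbone after brightness normalization (steps 1-2).
normal_patches = np.load("normal_patch_features.npy")  # hypothetical file
memory_bank = greedy_coreset(normal_patches, n_select=max(1, len(normal_patches) // 100))
nn = NearestNeighbors(n_neighbors=1).fit(memory_bank)

def anomaly_scores(test_patches: np.ndarray):
    """Per-patch distances give localization; their max gives the image score (step 3)."""
    d, _ = nn.kneighbors(test_patches)
    return float(d.max()), d.ravel()
```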

Development and Evaluation of the PBL Teaching/Learning Process Plan of 'Housing Culture and Practical Space Use' for Home Economics in Middle School (중학교 가정과 문제 중심 '주생활 문화와 주거 공간 활용' 교수·학습 과정안 개발과 평가)

  • Cho, Jiwon;Cho, Jaesoon
    • Journal of Korean Home Economics Education Association / v.32 no.2 / pp.59-76 / 2020
  • The purpose of this study was to develop and evaluate a problem-based learning (PBL) teaching/learning process plan on 'housing culture and practical space use' for home economics in middle school. The plan, consisting of four lessons, was developed and implemented following the steps of the ADDIE model. Various activity materials (4 scenarios, 6 individual activity sheets, 10 reading texts, and 5 working resources) and visual materials (4 sets of PPT slides and 4 videos), as well as a questionnaire, were developed for the four lessons. The plan was implemented in a single class of 21 students at H middle school in rural Kyeongnam from April 1 to 12, 2019. Students enjoyed and were highly satisfied with all four lessons in aspects such as understanding of the contents, adequacy of materials and activities, and usefulness in their own daily lives. They also participated more actively than usual and were interested in taking more such lessons. Students reported that they accomplished the goal of each lesson as well as the overall objectives to a high degree. They showed interest in the major parts of the PBL lessons, such as the scenarios and group activities, and engaged in drawing shared-housing space plans with the '5D Planner' web program, which they described as the best part of the lessons. The teaching/learning process plan developed in this study may also be used as a theme for maker education, which is emerging these days. It can be concluded that the PBL teaching/learning process plan for 'housing culture and practical space use' would contribute to improving students' attitudes toward living with others and their ability to manage their individual lives.

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and is therefore known to perform well at capturing both community structure and structural equivalence. The embedding assigns each node a fixed-length vector derived from walks of constant length starting at that node, so the resulting vectors are easy to use as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of Node2vec, this study applied the method to international trade information for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification of Korea's exports within the industry's global value chain. The optimal prediction model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, outperforming the logistic regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. The LightGBM-based optimal model also outperformed the link prediction model from a previous study, used here as a benchmark: that model recorded a recall of only 0.75, whereas the proposed model achieved 0.79. The difference stems from the model training strategy. In this study, trades were grouped by trade value, and prediction models were trained differently for these groups: (1) randomly masking trades with no condition on trade value, (2) randomly masking a portion of trades at or above the average trade value, and (3) randomly masking a portion of trades in the top 25% by trade value. The experiments confirmed that the model trained by randomly masking trades at or above the average trade value performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived from this model were appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and LightGBM, and derives useful implications for masking strategies that yield better link prediction during model training.
The study also has policy utility because graph-embedding-based link prediction has rarely been applied to trade transactions. Its results support rapid responses to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and the approach is sufficiently useful as a tool for policy decision-making.
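
A hedged sketch of this pipeline follows: embed the trade network with Node2vec, build edge features for country pairs, and train a LightGBM binary classifier to score links. It assumes the node2vec and lightgbm PyPI packages; the edge list file, Hadamard edge features, and simple negative sampling are stand-ins for the paper's masking scheme.

```python
# Hedged sketch: Node2vec embedding + LightGBM link prediction on a trade graph.
import networkx as nx
import numpy as np
import lightgbm as lgb
from node2vec import Node2Vec

G = nx.read_edgelist("trade_edges.txt")  # hypothetical exporter-importer edge list
model = Node2Vec(G, dimensions=64, walk_length=30, num_walks=100).fit(window=5)

def edge_feature(u: str, v: str) -> np.ndarray:
    # Hadamard product of the two node vectors, a common edge representation.
    return model.wv[u] * model.wv[v]

pos = list(G.edges())                    # observed trade links
neg = list(nx.non_edges(G))[: len(pos)]  # sampled absent links (negatives)
X = np.array([edge_feature(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = lgb.LGBMClassifier(n_estimators=300).fit(X, y)
# High-probability non-edges are candidate (untapped) export destinations.
scores = clf.predict_proba(np.array([edge_feature(u, v) for u, v in neg]))[:, 1]
```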

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy; bankruptcy prediction is therefore an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models, and ensemble learning techniques are known to be very useful for improving the generalization ability of a classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers: each ensemble member is trained on a randomly chosen feature subspace, and the members' predictions are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but very sensitive to changes in the feature space, which makes it a good classifier for the random subspace method, and the KNN random subspace ensemble has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble model; however, few studies have focused on optimizing them. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers, using a genetic algorithm to optimize the ensemble and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset of Korean companies. The research data included 1,800 externally non-audited firms, comprising 900 bankruptcy and 900 non-bankruptcy cases. Initially, the dataset consisted of 134 financial ratios; prior to the experiments, 75 were selected based on an independent-sample t-test of each financial ratio against the bankruptcy/non-bankruptcy outcome, and 24 of these were then selected using logistic regression backward feature selection. The complete dataset was separated into training and validation parts, and the training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting, with the prediction accuracy on the latter used as the fitness value. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the proposed model with other models in terms of classification accuracy, and the Q-statistic values and average classification accuracies of the base classifiers were also investigated. The experimental results showed that the proposed model outperformed the other models, such as the single KNN model and the plain random subspace ensemble model.
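
A compact sketch of the technique follows. The full genetic algorithm is elided for brevity; a random search over per-member k values and feature subsets stands in for the GA's chromosome evaluation, and the data files are hypothetical.

```python
# Hedged sketch: KNN random subspace ensemble with searched k values and subsets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fit_ensemble(X, y, subsets, ks):
    """One candidate solution: a feature subset and a k value per base classifier."""
    return [KNeighborsClassifier(n_neighbors=int(k)).fit(X[:, s], y)
            for s, k in zip(subsets, ks)]

def ensemble_predict(members, subsets, X):
    votes = np.array([m.predict(X[:, s]) for m, s in zip(members, subsets)])
    return (votes.mean(axis=0) > 0.5).astype(int)  # majority vote over 0/1 labels

X, y = np.load("ratios.npy"), np.load("labels.npy")  # hypothetical 24-ratio dataset
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best, best_acc = None, 0.0
for _ in range(50):  # random search standing in for the genetic algorithm
    subsets = [rng.choice(X.shape[1], size=8, replace=False) for _ in range(15)]
    ks = rng.integers(1, 16, size=15)
    members = fit_ensemble(X_tr, y_tr, subsets, ks)
    acc = (ensemble_predict(members, subsets, X_val) == y_val).mean()
    if acc > best_acc:
        best, best_acc = (subsets, ks), acc
```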

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.93-110 / 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains, such as recommending Web pages, books, movies, music, and products. However, CF has a critical shortcoming: it finds neighbors whose preferences are like those of the target customer and recommends the products those neighbors have liked most, so it works properly only when there is a sufficient number of ratings on common products. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies can collect more data and use a wider variety of large-scale information, and many consider utilizing big data important because it improves competitiveness and creates new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data facilitate more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp users' preferences and behavior from various viewpoints; we derive five user profiles based on personal information: ratings, site preferences, demographics, Internet usage, and topics in text. Next, the similarity between users is calculated from the profiles, and neighbors of users are found from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average similarity across profiles, or the weighted average similarity across profiles. Finally, the products that the neighborhood prefers most are recommended to the target user. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specializing in analyzing Web site rankings. R was used to implement the proposed recommender system, and SAS E-Miner was used to conduct the topic analysis via keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile-based CF algorithm, with the ensemble approach using weighted average similarity showing the highest performance.
That is, the rate of improvement in F1 is 16.9 percent for the ensemble approach using weighted average similarity and 8.1 percent for the ensemble approach using the average similarity of each profile. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study has significance in suggesting what kinds of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively. However, the methodology should be studied further for real-world application: the proposed method should be applied to different recommendation algorithms to compare recommendation accuracy and to identify which combination performs best.
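
The best-performing variant, weighted average similarity, is straightforward to sketch. Below, the five profile matrices and their weights are hypothetical placeholders for the rating/site/demographic/usage/topic profiles described above.

```python
# Hedged sketch: weighted-average ensemble of per-profile user similarities.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

profiles = {name: np.load(f"{name}_profile.npy")  # hypothetical (n_users, d) matrices
            for name in ["rating", "site", "demographic", "usage", "topic"]}
weights = {"rating": 0.4, "site": 0.2, "demographic": 0.1, "usage": 0.2, "topic": 0.1}

# Weighted average of cosine similarities computed on each profile separately.
sim = sum(w * cosine_similarity(profiles[name]) for name, w in weights.items())
np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

def top_neighbors(user: int, k: int = 20) -> np.ndarray:
    """Indices of the k most similar users under the ensemble similarity."""
    return np.argsort(sim[user])[::-1][:k]
```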

One-probe P300-based concealed information test with machine learning (기계학습을 이용한 단일 관련자극 P300기반 숨김정보검사)

  • Hyuk Kim;Hyun-Taek Kim
    • Korean Journal of Cognitive Science / v.35 no.1 / pp.49-95 / 2024
  • Polygraph examination, statement validity analysis, and the P300-based concealed information test are the three major examination tools used to determine a person's truthfulness and credibility in criminal procedure. Although polygraph examination is the most common in criminal procedure, it has little admissibility as evidence due to its weak scientific basis. In the 1990s, to compensate for this weakness, Farwell and Donchin proposed the P300-based concealed information test technique. The technique has two strong points: it is easy to conduct alongside the polygraph, and it has a plentiful scientific basis. Nevertheless, the P300-based concealed information test is used infrequently because of the number of probe stimuli it requires. A probe stimulus contains concealed information relevant to the crime or other investigated situation; the traditional protocol needs three or more probe stimuli, but these are hard to acquire because most crime-relevant information is disclosed during the investigation. In addition, the test uses an oddball paradigm, which creates an imbalance between the numbers of probe and irrelevant stimuli, and this imbalance may cause systematic underestimation of the P300 amplitude of irrelevant stimuli. To overcome these two limitations, a one-probe P300-based concealed information test protocol is explored with various machine learning algorithms. According to this study, the parameters of the modified one-probe protocol are as follows. For female and male face stimuli, the recommended stimulus duration is 400 ms, the recommended number of repetitions is 60, the recommended P300 amplitude measure is the peak-to-peak method, and the recommended cut-offs are 90% for the guilty condition and 30% for the innocent condition. For two-syllable word stimuli, the recommended stimulus duration is 300 ms, with the same number of repetitions, amplitude measure, and cut-offs. It was also confirmed that logistic regression (LR), linear discriminant analysis (LDA), and k-nearest neighbors (KNN) are suitable algorithms for analyzing P300 amplitude. The one-probe protocol with machine learning helps increase the utilization of the P300-based concealed information test and supports determining a person's truthfulness and credibility alongside polygraph examination in criminal procedure.
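
As an illustration of the analysis pipeline the abstract describes, the sketch below computes a peak-to-peak P300 amplitude per epoch and cross-validates the three classifiers named above. The 300-800 ms window, sampling rate, and feature files are assumptions; EEG preprocessing and epoching are elided.

```python
# Hedged sketch: peak-to-peak P300 amplitude + LR/LDA/KNN classification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def peak_to_peak(epoch: np.ndarray, fs: float = 500.0) -> float:
    """Peak-to-peak amplitude: largest positive peak in an assumed 300-800 ms
    post-stimulus window minus the most negative point that follows it.
    Applied per epoch during feature extraction, upstream of the classifiers."""
    win = epoch[int(0.3 * fs): int(0.8 * fs)]
    pos = int(win.argmax())
    return float(win[pos] - win[pos:].min())

# X: per-subject features (e.g., probe vs. irrelevant peak-to-peak amplitudes);
# y: guilty/innocent labels. Both files are hypothetical.
X, y = np.load("p300_features.npy"), np.load("labels.npy")

for clf in (LogisticRegression(max_iter=1000),
            LinearDiscriminantAnalysis(),
            KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```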