• Title/Summary/Keyword: Personal Information Classifying

Search Results: 34

An Inference System Using BIG5 Personality Traits for Filtering Preferred Resource

  • Jong-Hyun, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.9-16 / 2023
  • In the IoT environment, various objects interact with one another, and various services can be composed on top of this environment. In a previous study, we developed a resource collaboration system that provides services by substituting for the limited resources of a user's personal device through resource collaboration. However, in that system, the inference time increases exponentially as the number of resources and situations grows. To solve this problem, this study proposes a method of classifying users and resources by applying the BIG5 user type classification model. In this paper, we propose a method to reduce the inference time by filtering the user's preferred resources through BIG5 type-based preprocessing and using the filtered resources as the input to the recommendation system. We implement the proposed method as a prototype system and demonstrate the validity of our approach through performance and user-satisfaction evaluations.
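The abstract describes filtering a user's preferred resources by BIG5 type before recommendation. Below is a minimal Python sketch of that kind of preprocessing, assuming a simple mapping from a user's dominant trait to preferred resource categories; the trait-to-category mapping and resource fields are illustrative, not taken from the paper.

```python
# Illustrative sketch: filter candidate resources by a user's dominant BIG5 trait
# before handing them to a recommender. Mapping and data fields are assumptions.
from typing import Dict, List

# Hypothetical mapping of dominant traits to preferred resource categories.
TRAIT_PREFERENCES: Dict[str, set] = {
    "openness": {"display", "camera"},
    "conscientiousness": {"storage", "scheduler"},
    "extraversion": {"speaker", "display"},
    "agreeableness": {"speaker"},
    "neuroticism": {"storage"},
}

def dominant_trait(big5_scores: Dict[str, float]) -> str:
    """Return the trait with the highest score for this user."""
    return max(big5_scores, key=big5_scores.get)

def filter_resources(big5_scores: Dict[str, float],
                     resources: List[Dict]) -> List[Dict]:
    """Keep only resources whose category matches the user's dominant trait."""
    preferred = TRAIT_PREFERENCES[dominant_trait(big5_scores)]
    return [r for r in resources if r["category"] in preferred]

if __name__ == "__main__":
    user = {"openness": 0.8, "conscientiousness": 0.4, "extraversion": 0.6,
            "agreeableness": 0.5, "neuroticism": 0.2}
    candidates = [{"id": "tv-1", "category": "display"},
                  {"id": "nas-1", "category": "storage"}]
    # The filtered list would then be passed to the recommendation system.
    print(filter_resources(user, candidates))
```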

Classifying Midair Collision Risk in Airspace Using ADS-B and Mode-S Open-source Data (ADS-B와 Mode-S 오픈소스 데이터를 활용한 공중충돌 위험 양상 분류)

  • Jongboo Kim;Dooyoul Lee
    • Journal of Advanced Navigation Technology / v.27 no.5 / pp.552-560 / 2023
  • Aircraft midair collisions are dangerous events that can cause massive casualties. To prevent them, civil aviation has mandated the installation of TCAS (ACAS), which is becoming more sophisticated with the help of new technologies. However, there are institutional obstacles to collecting data for TCAS research in Korea, which limit the data available for individual research. ADS-B and Mode-S automatically broadcast various information about an aircraft's flight status. This data also contains information about TCAS resolution advisories (RA), which anyone can use to find examples of TCAS RA operation. We used the databases of ADS-B Exchange and OpenSky Network to acquire data and visually represented three TCAS RA cases using Python. We also identified domestic TCAS cases in the first half of 2023 and analyzed their characteristics to confirm the usefulness of the data.
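The abstract mentions acquiring data from ADS-B Exchange and OpenSky Network with Python. Below is a minimal sketch of pulling live ADS-B state vectors from the OpenSky Network REST API; the bounding box roughly covering Korean airspace is an assumption, and decoding TCAS RA reports from Mode-S downlink data is not shown.

```python
# Minimal sketch: fetch live ADS-B state vectors over (roughly) Korean airspace
# from the OpenSky Network REST API. The bounding box is an assumption; decoding
# TCAS RA reports requires raw Mode-S messages and is not shown here.
import requests

OPENSKY_URL = "https://opensky-network.org/api/states/all"
# Approximate bounding box around the Korean peninsula (illustrative values).
PARAMS = {"lamin": 33.0, "lomin": 124.0, "lamax": 39.0, "lomax": 132.0}

def fetch_states():
    resp = requests.get(OPENSKY_URL, params=PARAMS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("states") or []

if __name__ == "__main__":
    for s in fetch_states()[:10]:
        # Selected fields of an OpenSky state vector (by position in the list).
        icao24, callsign, country = s[0], (s[1] or "").strip(), s[2]
        lon, lat, baro_alt = s[5], s[6], s[7]
        print(f"{icao24} {callsign:<8} {country:<15} lat={lat} lon={lon} alt={baro_alt}")
```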

Diagnostic Usefulness of N-Terminal Probrain-type Natriuretic Peptide to Detect Congestive Heart Failure Patients (울혈성 심부전 환자에서 N-Terminal Probrain-type Natriuretic Peptide의 진단적 유용성)

  • Son, Gye-Sung
    • Korean Journal of Clinical Laboratory Science / v.37 no.2 / pp.88-95 / 2005
  • Although echocardiography has been recognized as the method of choice among the various diagnostic tools for detecting congestive heart failure (CHF), it has limitations in terms of time, labor, and procedure. We analyzed N-terminal probrain-type natriuretic peptide (NT-proBNP) results together with various echocardiographic parameters to clarify the diagnostic usefulness of NT-proBNP in detecting patients with CHF. We analyzed sera from a total of 242 in-patient and out-patient cases requested by the cardiovascular section of the Department of Internal Medicine at Chungnam National University Hospital from March 2003 to May 2004. The procedures were performed in the following order: sampling, NT-proBNP analysis, data acquisition, and data analysis. All data, including personal information and echocardiographic findings, were acquired by medical record review. When the study population was classified into six groups according to the degree of left ventricular ejection fraction (LVEF), the serum level of NT-proBNP was higher in the group with an LVEF of 51-60% (P=0.023). Correlations between the serum level of NT-proBNP and the echocardiographic parameters were low: LVESD (r=0.1513), LVEDD (r=0.0831), LVEF (r=0.2035), IVST (r=0.03), and LVPWT (r=0.0728). When NT-proBNP was compared across atrial and/or ventricular enlargement, the groups with both left atrial and left ventricular enlargement (p=0.186), left atrial enlargement only (p=0.105), or left ventricular enlargement only (p=0.256) showed higher NT-proBNP levels than the group with no enlargement, though without statistical significance. In searching for the optimal serum NT-proBNP cutoff, sensitivity (98.9%) and specificity (100%) were highest at a cutoff of 300 pg/mL. These findings suggest that measuring serum NT-proBNP might detect patients with CHF earlier than echocardiography, especially patients with asymptomatic or mildly symptomatic CHF. In conclusion, the NT-proBNP test proved clinically useful for diagnosing CHF.
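The abstract reports that a 300 pg/mL cutoff gave the best sensitivity and specificity. As a hedged illustration of how such a cutoff is evaluated (with synthetic placeholder values, not the study's data), a short Python sketch:

```python
# Illustrative sketch: evaluate sensitivity and specificity of an NT-proBNP
# cutoff for CHF. The data below are synthetic placeholders, not study data.
import numpy as np

def sensitivity_specificity(values, labels, cutoff):
    """labels: 1 = CHF, 0 = no CHF; the test is positive when value >= cutoff."""
    values, labels = np.asarray(values, dtype=float), np.asarray(labels)
    pred = values >= cutoff
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    nt_probnp = [120, 250, 310, 4200, 890, 95, 1500, 60]   # pg/mL (synthetic)
    chf =       [0,   0,   1,   1,    1,   0,  1,    0]
    sens, spec = sensitivity_specificity(nt_probnp, chf, cutoff=300)
    print(f"cutoff=300 pg/mL  sensitivity={sens:.3f}  specificity={spec:.3f}")
```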


A Classification and Selection Method of Emotion Based on Classifying Emotion Terms by Users (사용자의 정서 단어 분류에 기반한 정서 분류와 선택 방법)

  • Rhee, Shin-Young;Ham, Jun-Seok;Ko, Il-Ju
    • Science of Emotion and Sensibility / v.15 no.1 / pp.97-104 / 2012
  • Recently, as users have produced large amounts of text data, opinion mining that analyzes information and opinions about users has become a hot issue. Within opinion mining, sentiment analysis in particular studies emotions such as positivity, negativity, happiness, and sadness by analyzing personal opinions or emotions about commercial products, social issues, and politicians. For sentiment analysis, previous studies used a mapping method that places a distribution of emotions on two dimensions composed of valence and arousal, but they set up the distribution of emotions arbitrarily. To solve this problem, we constructed a distribution of 12 emotions by carrying out a survey using a list of Korean emotion words. Also, because certain emotional states on the two dimensions overlap multiple emotions, we proposed a selection method based on the roulette-wheel method using selection probabilities. The proposed method classifies a text into an emotion by extracting emotion terms from the text.
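The selection step described above is standard roulette-wheel (fitness-proportionate) selection. A minimal Python sketch follows; the emotion names and probabilities are illustrative only.

```python
# Minimal sketch of the roulette-wheel step: when a point in the valence-arousal
# space overlaps several emotions, pick one in proportion to its selection
# probability. The probabilities below are illustrative.
import random

def roulette_select(emotion_probs, rng=random):
    """emotion_probs: dict of emotion -> selection probability (need not sum to 1)."""
    if not emotion_probs:
        raise ValueError("no candidate emotions")
    total = sum(emotion_probs.values())
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for emotion, prob in emotion_probs.items():
        cumulative += prob
        if pick <= cumulative:
            return emotion
    return emotion  # fallback for floating-point edge cases

if __name__ == "__main__":
    # Hypothetical overlap at one point in the valence-arousal plane.
    overlapping = {"joy": 0.5, "contentment": 0.3, "surprise": 0.2}
    print(roulette_select(overlapping))
```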


A Study on eGovFrame Security Analysis and Countermeasures (eGovFrame 보안 분석 및 대응 방안에 관한 연구)

  • Joong-oh Park
    • Journal of Industrial Convergence / v.21 no.3 / pp.181-188 / 2023
  • The e-Government standard framework (eGovFrame) provides overall technologies for web environment development at domestic government and public institutions, such as reuse of common components, connection of standard modules, and resolution of dependencies. However, in a standardized development environment, there is a risk of running outdated versions relative to the core framework and of leaking personal and confidential information through hacking or computer viruses. This study directly analyzes security vulnerabilities, focusing on websites in Korea that run eGovFrame. By analyzing and classifying vulnerabilities at the source code level of the underlying programming language, five items associated with representative security vulnerabilities could be extracted. As countermeasures, security settings and functions in two steps (a first and a second step) and a security policy are explained. This study aims to improve the security functions of the e-Government framework and contribute to revitalizing the service.

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.647-654 / 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and its accuracy is the basis for making artificial-intelligence-based services on the Internet useful. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected users' personal and sensitive data, such as photographs and voices, retain the data indefinitely. Users can neither delete their data nor limit the purpose of its use. For example, data owners such as medical institutions that want to apply deep learning technology to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we design a privacy-preserving deep learning technique that allows multiple workers to use a neural network model jointly, without sharing their input datasets, in a multi-party system. We propose a method that can selectively share small subsets using an optimization algorithm based on modified stochastic gradient descent, and we confirm that it facilitates training with increased learning accuracy while protecting private information.
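The abstract describes selectively sharing small subsets of model updates via a modified stochastic gradient descent. The following is a minimal sketch of that general idea (selective sharing of the largest-magnitude gradient entries) on a toy linear model; it is not the authors' implementation, and the model, data, and sharing fraction are assumptions.

```python
# Minimal sketch of selective gradient sharing: each party uploads only a small
# fraction of its largest-magnitude gradients, so raw training data never
# leaves its owner. Model and data are toy examples, not the paper's setup.
import numpy as np

def top_fraction_mask(grad, fraction=0.1):
    """Boolean mask selecting the largest-magnitude `fraction` of gradient entries."""
    k = max(1, int(fraction * grad.size))
    threshold = np.partition(np.abs(grad).ravel(), -k)[-k]
    return np.abs(grad) >= threshold

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 / len(X) * X.T @ (X @ w - y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_global = np.zeros(20)                       # shared model parameters
    for step in range(100):
        for _ in range(3):                        # three parties, each with private data
            X = rng.normal(size=(32, 20))
            y = X @ np.arange(20) / 20.0
            g = local_gradient(w_global, X, y)
            mask = top_fraction_mask(g, fraction=0.1)
            w_global[mask] -= 0.05 * g[mask]      # only selected entries are shared
    print("first weights:", np.round(w_global[:5], 3))
```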

A Study on Shot Segmentation and Indexing of Language Education Videos by Content-based Visual Feature Analysis (교육용 어학 영상의 내용 기반 특징 분석에 의한 샷 구분 및 색인에 대한 연구)

  • Han, Heejun
    • Journal of the Korean Society for information Management / v.34 no.1 / pp.219-239 / 2017
  • As IT develops rapidly and the personal adoption of smart devices increases, video in particular is used as a medium of information transmission among audiovisual materials. Video has become an indispensable element of information services and is used in various ways, such as one-way delivery through TV, interactive services through the Internet, and audiovisual library lending. In the Internet environment especially, information providers try to reduce the effort and cost of processing the information they provide for video services on smart devices. In addition, users want to use only the parts they need because of the burden of excessive network usage and of time and space constraints. Therefore, it is necessary to enhance the usability of video by automatically classifying, summarizing, and indexing similar parts of its content. In this paper, we propose a method of automatically segmenting the shots that make up a video by analyzing the contents and characteristics of language education videos, and of indexing the detailed content information of these videos by combining visual features. The accuracy of the semantics-based shot segmentation is high, and it can be effectively applied to summarization services for language education videos.
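As one way to illustrate content-based shot segmentation of the kind described above, the sketch below detects shot boundaries from drops in color-histogram similarity between consecutive frames using OpenCV; the threshold and file name are placeholders, not values from the paper.

```python
# Minimal sketch of content-based shot segmentation: detect shot boundaries by
# comparing color-histogram similarity between consecutive frames with OpenCV.
import cv2

def detect_shot_boundaries(video_path, threshold=0.6):
    """Return frame indices where histogram correlation drops below `threshold`."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:          # low similarity -> likely shot change
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

if __name__ == "__main__":
    print(detect_shot_boundaries("lecture.mp4"))  # hypothetical file name
```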

PIRS : Personalized Information Retrieval System using Adaptive User Profiling and Real-time Filtering for Search Results (적응형 사용자 프로파일기법과 검색 결과에 대한 실시간 필터링을 이용한 개인화 정보검색 시스템)

  • Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.21-41 / 2010
  • This paper proposes a system that can serve users appropriate search results through real-time filtering, and implements an adaptive user-profiling-based personalized information retrieval system (PIRS) using users' implicit feedback, in order to address the problem that existing search systems such as Google or MSN do not satisfy users' various personal search needs. One reason existing search systems struggle to satisfy these needs is that users' search intentions are hard to recognize because of their inherent uncertainty: users may want different search results for the same query. For example, when a user enters the query "java", the user may want results about the programming language, the coffee, or the Indonesian island. In other words, this uncertainty stems from the ambiguity of search queries, and the fewer words a query contains, the greater the uncertainty. Real-time filtering of search results returns only those results that belong to the user-selected domain for a given query. Although this looks similar to a general directory search, it differs in that the search is executed over all web documents rather than sites, and each document in the search results is classified into the given domain in real time. By applying information filtering based on real-time directory classification of search results to personalization, the number of results delivered to users is effectively decreased and satisfaction with the results is improved. In this paper, a user preference profile has a hierarchical structure consisting of domains, issued queries, and selected documents. Because this hierarchical structure captures the context in which searches were performed, it can deal with the uncertainty of user intentions, which may differ for the same query according to context such as time or place. Furthermore, this structure can more effectively track a user's web document search behavior for each domain and recognize changes in user intentions in a timely manner. The IP address of each device is used to identify each user, and the user preference profile is continuously updated based on the observed user behavior on search results. We also measured user satisfaction with search results by observing user behavior on the selected results. Our proposed system automatically recognizes user preferences by using implicit feedback such as dwell time on a selected search result and the way the user leaves the page, and dynamically updates their preferences. Whenever a user performs a search, our system looks up the user preference profile for the given IP address; if the profile does not exist, a new one is created on the server, and otherwise the profile is updated with the transmitted information. When no profile exists on the server, the system provides Google's results to the user, and the reflection value is increased or decreased with each search. We carried out experiments to evaluate the performance of the adaptive user preference profile technique and real-time filtering, and the results are satisfactory. According to our experimental results, participants were satisfied with an average of 4.7 documents in the top-10 search list when using the adaptive user preference profile technique with real-time filtering, which shows that our method outperforms Google's by 23.2%.
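To illustrate the implicit-feedback mechanism described above, the following hedged Python sketch updates a per-IP preference profile from dwell time and filters classified results by domain weight; the thresholds and profile structure are assumptions, not the PIRS implementation.

```python
# Illustrative sketch: dwell time on a clicked result raises or lowers a
# domain's weight in a profile keyed by IP address, and results classified into
# low-weight domains are filtered out. Thresholds and structure are assumptions.
from collections import defaultdict

profiles = defaultdict(lambda: defaultdict(float))   # ip -> domain -> weight

def update_profile(ip, domain, dwell_seconds, min_dwell=30):
    """Increase the domain weight for a satisfying visit, decrease otherwise."""
    delta = +1.0 if dwell_seconds >= min_dwell else -0.5
    profiles[ip][domain] += delta
    return profiles[ip]

def filter_results(ip, classified_results):
    """Keep results whose real-time-classified domain has a positive weight."""
    prefs = profiles[ip]
    return [r for r in classified_results
            if prefs.get(r["domain"], 0.0) > 0.0] or classified_results

if __name__ == "__main__":
    update_profile("10.0.0.7", "programming", dwell_seconds=95)
    update_profile("10.0.0.7", "travel", dwell_seconds=5)
    results = [{"url": "https://docs.oracle.com", "domain": "programming"},
               {"url": "https://example.com/java-island", "domain": "travel"}]
    print(filter_results("10.0.0.7", results))
```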

A Study on Spam Document Classification Method using Characteristics of Keyword Repetition (단어 반복 특징을 이용한 스팸 문서 분류 방법에 관한 연구)

  • Lee, Seong-Jin;Baik, Jong-Bum;Han, Chung-Seok;Lee, Soo-Won
    • The KIPS Transactions:PartB / v.18B no.5 / pp.315-324 / 2011
  • In the Web environment, a flood of spam causes serious social problems such as personal information leakage, monetary loss from phishing, and distribution of harmful content. Moreover, the types and techniques of spam distribution that must be controlled change day by day. The learning-based spam classification method using the bag-of-words model has been the most widely used method to date. However, this method is vulnerable to the anti-spam avoidance techniques that recent spam commonly employs, because it classifies spam documents using only keyword occurrence information obtained during classification model training. In this paper, we propose a spam document detection method that uses the characteristic repetition of words in spam documents as a countermeasure against anti-spam avoidance techniques. Most recent spam documents tend to repeat the key phrases they are designed to spread, and this tendency can be used as a measure for classifying spam documents. We define six variables that represent characteristics of word repetition and use them as a feature set for constructing a classification model. The effectiveness of the proposed method is evaluated in experiments with blog posts and e-mail data, and the results show that the proposed method outperforms other approaches.
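The abstract does not spell out the six word-repetition variables, so the sketch below uses six plausible stand-in features computed from token counts; they illustrate the general idea rather than the authors' exact definitions.

```python
# Illustrative sketch of word-repetition features for spam detection. These six
# features are plausible stand-ins, not the paper's exact variables; they would
# be fed to any standard classifier during model construction.
from collections import Counter
import re

def repetition_features(text):
    tokens = re.findall(r"\w+", text.lower())
    counts = Counter(tokens)
    total, distinct = len(tokens), len(counts)
    most_common = counts.most_common(1)[0][1] if counts else 0
    repeated = sum(1 for c in counts.values() if c > 1)
    return {
        "total_tokens": total,
        "distinct_tokens": distinct,
        "type_token_ratio": distinct / total if total else 0.0,
        "max_term_frequency": most_common,
        "max_tf_ratio": most_common / total if total else 0.0,
        "repeated_term_ratio": repeated / distinct if distinct else 0.0,
    }

if __name__ == "__main__":
    spammy = "cheap watches cheap watches cheap watches buy now buy now"
    print(repetition_features(spammy))
```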

A Study on the Feature Point Extraction Methodology based on XML for Searching Hidden Vault Anti-Forensics Apps (은닉형 Vault 안티포렌식 앱 탐색을 위한 XML 기반 특징점 추출 방법론 연구)

  • Kim, Dae-gyu;Kim, Chang-soo
    • Journal of Internet Computing and Services / v.23 no.2 / pp.61-70 / 2022
  • General smartphone users often use Vault apps to protect personal information such as their photos and videos. However, there are increasing cases of criminals using Vault app functions for anti-forensic purposes to hide illegal videos; these apps are among those registered on Google Play. This paper proposes a methodology for extracting feature points through XML-based keyword frequency analysis to discover Vault apps used by criminals, applying text mining techniques to extract the feature points. The XML syntax was compared and analyzed using the strings.xml files included in 15 hidden Vault anti-forensics apps and in non-hidden Vault apps, respectively. In the hidden Vault anti-forensics apps, more hiding-related words were found, at higher frequency, in the first and second rounds of terminology processing. Unlike most conventional methods that statically analyze APK files from an engineering point of view, this paper is meaningful in that it approaches the problem from a humanities and sociological point of view to find features for classifying anti-forensics apps. In conclusion, applying text mining techniques through XML parsing can serve as basic data for discovering hidden Vault anti-forensics apps.
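As a hedged illustration of the strings.xml keyword-frequency idea, the sketch below parses an app's strings.xml and counts hiding-related terms; the keyword list and file path are assumptions, and the paper's full methodology also involves multiple rounds of terminology processing.

```python
# Minimal sketch: parse an Android app's strings.xml and count hiding-related
# keywords. The keyword list and the file path are illustrative assumptions.
import re
import xml.etree.ElementTree as ET
from collections import Counter

HIDE_KEYWORDS = {"hide", "hidden", "vault", "secret", "lock", "private"}  # assumed list

def keyword_frequencies(strings_xml_path):
    root = ET.parse(strings_xml_path).getroot()
    words = []
    for string_elem in root.iter("string"):
        text = (string_elem.text or "").lower()
        words.extend(re.findall(r"[a-z]+", text))
    counts = Counter(words)
    return {kw: counts.get(kw, 0) for kw in HIDE_KEYWORDS}

if __name__ == "__main__":
    freqs = keyword_frequencies("res/values/strings.xml")  # hypothetical path
    print(sorted(freqs.items(), key=lambda kv: -kv[1]))
```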