• Title/Summary/Keyword: Fundamental Ontology (기초 존재론)


Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important source for target marketing and personalized advertisements on digital marketing channels such as email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although marketing departments can obtain demographics through online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere on a webpage, the activity is logged in semi-structured website log files. Such data show what pages users visited, how long they stayed there, how often they visited, when they usually visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with the demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text information from the web pages visited. The demographic attributes to predict also vary across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for prediction model building. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated to build the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with the demographics from the results of previous research, and then to identify which data mining method is best suited to predict each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and from the results of previous research, 64 clickstream attributes are used to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering the 5 demographics and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction is best performed using decision-tree-based dimension reduction with a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
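The four-step process described in the abstract (profile construction, dimension reduction, model building, and 5-fold cross-validated evaluation) can be sketched roughly as follows. This is a minimal illustration in Python/scikit-learn rather than the authors' IBM SPSS Modeler workflow; the file name, column names, and model parameters are assumptions.

```python
# Minimal sketch (not the authors' SPSS Modeler workflow): compare dimension
# reduction + classifier combinations for one demographic target, e.g. gender.
# The feature layout and data file are hypothetical placeholders.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

profiles = pd.read_csv("user_profiles.csv")  # 64 clickstream attributes + 5 demographics (assumed layout)
X = profiles.drop(columns=["gender", "age", "marital_status", "residence", "job"])
y = profiles["gender"]

candidates = {
    "svm_no_reduction": Pipeline([("scale", StandardScaler()), ("clf", SVC())]),
    "pca_logreg": Pipeline([("scale", StandardScaler()), ("pca", PCA(n_components=20)),
                            ("clf", LogisticRegression(max_iter=1000))]),
    "pca_mlp": Pipeline([("scale", StandardScaler()), ("pca", PCA(n_components=20)),
                         ("clf", MLPClassifier(max_iter=500))]),
}

# 5-fold cross-validation, as in the paper, to pick the best model per target.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

The same loop would be repeated for each of the five demographic targets, keeping whichever reduction/classifier pair scores highest for that target.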

Automated-Database Tuning System With Knowledge-based Reasoning Engine (지식 기반 추론 엔진을 이용한 자동화된 데이터베이스 튜닝 시스템)

  • Gang, Seung-Seok;Lee, Dong-Joo;Jeong, Ok-Ran;Lee, Sang-Goo
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.06a
    • /
    • pp.17-18
    • /
    • 2007
  • Database tuning generally refers to a series of activities that make a database application run "faster" [1]. It is costly and time-consuming for a database administrator to master all the rules of thumb required for tuning and to apply them to each situation. For this reason, complex services in which different applications are interlocked necessarily require automated database performance management and tuning. To address this, this paper proposes a system that suggests automated database tuning principles based on a knowledge domain. Each database tuning theory is used as knowledge in the knowledge domain; the factors affecting performance are organized into objects and concepts, and tuning principles are inferred through a reasoning system, so that a tuning methodology suited to the current situation can be applied quickly and easily. Automated database tuning has been studied academically in several areas, for example Microsoft's AutoAdmin project [2], Oracle's SQL tuning architecture [3], COLT [4], DBA Companion [5], and SQUASH [6]. Classifying these optimization techniques by their functional methodology yields Design Tuning, Logical Structure Tuning, Sentence Tuning, SQL Tuning, Server Tuning, and System/Network Tuning. Among these, SQL Tuning relies on information that is numerically determined and already available, so it is easy to express as a structured model and to accommodate conditions that change with diverse user requirements; we therefore focused on it in solving performance problems. Following the processing steps of a database system, the objects, attributes, and relationships that constitute the DBMS are modeled. The database system is structured into three levels, Application / Query / DBMS, and in this paper the objects, attributes, relationships, and rules of thumb used in database tuning are analyzed and converted into knowledge that includes tuning principles. A tuning principle is a kind of golden rule for solving problems that occur in a database system, and it is expressed as the facts and rules that form the basis of the knowledge domain. A fact expresses the modeled system as a single knowledge entity in the knowledge domain, and a rule expresses a tuning principle as knowledge based on facts. Rules are divided into two types: rules defined in advance through system modeling, and rules used to infer tuning principles; most rules act as branches that lead to different solutions depending on the input values. Users can infer tuning principles from the automatically generated facts and rules and apply them to the database system, and can also add facts and rules manually through a GUI as needed. To infer tuning principles in the knowledge domain, JESS, a Java-based inference engine, is used. JESS is an expert system with its own scripting language [7]; it represents knowledge with declarative rules and performs reasoning over them. Its knowledge representation can easily express and accommodate tuning principles, and its small footprint and fast inference make it suitable for application tuning handled in real time. The main role of the knowledge-based module is to generate and store the new knowledge needed from the model of a given database system. To this end, facts and rules are expressed as triples, the basic unit of knowledge representation. A triple consists of three elements, Subject, Property, and Object, and most facts and rules are composed of a Condition part and an Action part, each built from basic triples or combinations of triples. By expressing the objects, attributes, and relationships of the database system model in this way, the knowledge can function as the facts and rules of the inference engine. To implement and test this, a web-based server-client system was assumed. The server consists of a Process Controller, Parser, Rule Database, and JESS Reasoning Engine, and the client consists of a Rule Manager Interface and a Result Viewer. The usefulness of the system was judged by comparing database performance measures, such as execution time, before and after applying the tuning principles; the experiments showed an additional preprocessing overhead of less than one second at most, and an improvement in processing time from about 1.5 times to about 3 times. The proposed system automatically generates tuning principles, transforms them into knowledge to derive new tuning principles, and allows customized tuning by letting users directly add facts and rules together with the factors affecting performance. Future work on automating processes such as query tuning and index optimization, on methods for efficiently defining and adding rules, and on effectively constructing the system model could further improve this research.
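The fact/rule representation described above can be illustrated with a small sketch. The paper itself uses the Java-based JESS engine and its rule language; the following Python analogue, with entirely hypothetical facts, thresholds, and rule names, only shows the idea of (Subject, Property, Object) triples combined into condition-action tuning rules.

```python
# Simplified illustration (not the JESS engine used in the paper): facts as
# (subject, property, object) triples and tuning rules as condition-action
# pairs over those triples. All names and thresholds here are hypothetical.
from typing import Callable, NamedTuple

class Triple(NamedTuple):
    subject: str
    property: str
    object: object

class Rule(NamedTuple):
    name: str
    condition: Callable[[set], bool]   # tests the current fact base
    action: str                        # the tuning principle to recommend

facts = {
    Triple("orders_query", "full_table_scan", True),
    Triple("orders_table", "row_count", 5_000_000),
}

rules = [
    Rule("index_advice",
         lambda fb: Triple("orders_query", "full_table_scan", True) in fb
                    and any(t.property == "row_count" and t.object > 1_000_000 for t in fb),
         "Consider adding an index on the columns used in the WHERE clause."),
]

# Forward-chaining pass: fire every rule whose condition holds on the facts.
for rule in rules:
    if rule.condition(facts):
        print(f"[{rule.name}] {rule.action}")
```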


Three meanings implied by Thomas Aquinas' "intellectualism" (토마스 아퀴나스의 '지성주의(주지주의)'가 내포하는 3가지 의미 - 『진리론(이성, 양심과 의식)』을 중심으로 -)

  • Lee, Myung-gon
    • Journal of Korean Philosophical Society
    • /
    • v.148
    • /
    • pp.239-267
    • /
    • 2018
  • In the matter of ethical and moral practice, Thomas Aquinas's thought is called "intellectualism". This does not mean only that intelligence is more important than will in moral practice, but also that it has epistemological, metaphysical, and psychological implications. The first is the affirmation of "the first principles of knowing" as the ground of the certainty of knowledge. In Thomism there are notions beyond doubt in the domain of practice as well as in the domain of reason; they are self-evident, and because of that certainty they become the basis of the certainty of all subsequent knowledge. The principles by which these are known are the first principles of knowing: reason and synderesis (conscience). Therefore, the "intellectualism" of Thomism provides the ground of metaphysics. Reason is classified into superior and inferior reason according to its object. The object of superior reason is the "metaphysical object" that human natural reason cannot deal with. This affirmation of superior reason provides a basis for human "autonomy" in the moral and religious domain, because even in areas beyond the object of natural reason it is possible to derive certain knowledge through one's own reasoning, and thus to carry out an act through one's own choosing. Likewise, for Thomas Aquinas, synderesis, as the first principle of the judgment of good and evil, applies to both superior and inferior reason and thus, except for truth given by direct divine revelation, precedes any worldly authority; acting according to it always guarantees truth and good. This means "subjectivity": in the act of moral practice, one can be the master of one's own act. Furthermore, "consciousness (conscientia)", the ability to comprehend everything in a holistic and simultaneous manner, is based on conscience (synderesis). So, at least in principle, correct or moral behavior in Thomism is grounded first in correct knowledge. Therefore, it can be said that true awareness (conscious awareness) in Thomas Aquinas's thought coincides with moral practice, or at least that knowledge is a decisive 'driver' of practice. This is perhaps the best explanation of what "intellectualism" means in Thomism.

Key Methodologies to Effective Site-specific Assessment in Contaminated Soils : A Review (오염토양의 효과적 현장조사에 대한 주요 방법론의 검토)

  • Chung, Doug-Young
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.32 no.4
    • /
    • pp.383-397
    • /
    • 1999
  • For sites to be investigated, the results of such an investigation can be used in determining goals for cleanup, quantifying risks, determining acceptable and unacceptable risk, and developing cleanup plans that do not cause unnecessary delays in the redevelopment and reuse of the property. To do this, it is essential that an appropriately detailed study of the site be performed to identify the cause, nature, and extent of contamination and the possible threats to the environment or to any people living or working nearby, through the analysis of samples of soil and soil gas, groundwater, surface water, and sediment. The migration pathways of contaminants are also examined during this phase. A key aspect of cost-effective site assessment, intended to help standardize and accelerate the evaluation of contaminated soils, is a simple step-by-step methodology that lets environmental science and engineering professionals calculate risk-based, site-specific soil levels for contaminants. Its use may significantly reduce the time it takes to complete soil investigations and cleanup actions at some sites, as well as improve the consistency of these actions across the nation. Effective site assessment requires criteria for choosing the type of standard and setting its magnitude; these criteria come from different sources depending on many factors, including the nature of the contamination. A general scheme for site-specific assessment consists of sequential Phases I, II, and III, defined by a work plan and soil screening levels. Phase I is conducted to identify and confirm a site's recognized environmental conditions resulting from past actions. If Phase I identifies potentially hazardous substances, Phase II is usually conducted to confirm the absence, or the presence and extent, of contamination; it involves the collection and analysis of samples. Phase III remediates the contaminated soils identified in Phases I and II. Important factors in determining whether an assessment standard is site-specific and suitable are (1) the spatial extent of the sampling and the size of the sample area; (2) the number of samples taken; (3) the sampling strategy; and (4) the way the data are analyzed. Although selected methods are recommended, the application of quantitative methods should be directed by users with prior training or experience in the dynamic site investigation process.


Fermented Extracts of Korean Mistletoe with Lactobacillus (FKM-110) Stimulate Macrophage and Inhibit Tumor Metastasis (유산균으로 발효된 한국산 겨우살이 추출물의 Macrophage 자극에 의한 면역학적 활성화와 종양전이 억제효과)

  • Yoon, Taek-Joon;Yoo, Yung-Choon;Kang, Tae-Bong;Lee, Kwan-Hee;Kwak, Jin-Hwan;Baek, Young-Jin;Huh, Chul-Sung;Kim, Jong-Bae
    • Korean Journal of Food Science and Technology
    • /
    • v.31 no.3
    • /
    • pp.838-847
    • /
    • 1999
  • Based on the findings that the extract of Korean mistletoe (KM-110) has immunological and anti-tumor activities and that its main component is a lectin called KML-U, this study investigated the immunostimulatory and anti-tumor activities of FKM-110, KM-110 fermented with lactobacillus, as a basic study for the development of a functional food with anti-tumor activity. The amount of lectin after fermentation, determined by ELISA, varied with the fermentation time and the kind of lactobacillus. The cytotoxic effects of FKM-110 on various tumor cells were significant and dependent on the concentration of KML-U and the kind of lactobacillus. FKM-110 stimulated macrophages and induced the secretion of cytokines such as IL-1 and IFN-γ, but this effect was not correlated with the concentration of lectin. FKM-110 fermented with Marshall Lactobacillus casei showed the most potent anti-tumor activity in experimental and spontaneous metastasis models. When yoghurt produced with KM-110, Marshall Lactobacillus casei, and skim milk was administered orally to mice, the metastasis of tumor cells was significantly inhibited.


A Processing of Progressive Aspect "te-iru" in Japanese-Korean Machine Translation (일한기계번역에서 진행형 "ている"의 번역처리)

  • Kim, Jeong-In;Mun, Gyeong-Hui;Lee, Jong-Hyeok
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.685-692
    • /
    • 2001
  • This paper describes how to disambiguate the aspectual meaning of the Japanese expression "-te iru" in Japanese-Korean machine translation. Due to the grammatical similarities of the two languages, almost all Japanese-Korean MT systems have been developed under the direct MT strategy, in which lexical disambiguation is essential to high-quality translation. Japanese has a progressive aspectual marker "-te iru" that is difficult to translate into its Korean equivalents because Korean has two different progressive aspectual markers: "-ko issta" for the action progressive and "-e issta" for the state progressive. Moreover, the aspectual systems of the two languages do not quite coincide, so the Korean progressive aspect cannot be determined from the Japanese meaning of "-te iru" alone. The progressive aspectual meaning may be partially determined by the meaning of the predicate, and the semantic meaning of the predicate may in turn be partially restricted by adverbials, so all Japanese predicates are classified into five classes: the 1st class is used only for the action progressive; the 2nd generally for the action progressive but occasionally for the state progressive; the 3rd only for the state progressive; the 4th generally for the state progressive but occasionally for the action progressive; and the 5th for the others. Heuristic rules are defined for disambiguating the 2nd and 4th classes on the basis of adverbs and adverbial phrases. In an experimental evaluation using more than 15,000 sentences from the Asahi newspaper, the proposed method improved translation quality by about 5%, which shows that it is effective in disambiguating "-te iru" for Japanese-Korean machine translation.
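As a rough illustration of the verb-class plus adverb heuristic described above (not the authors' actual system), the following Python sketch picks between "-ko issta" and "-e issta"; the verb classes and adverb lists are invented placeholders.

```python
# Illustrative sketch only: choose the Korean progressive marker for "-te iru"
# from a verb-class lexicon plus simple adverb heuristics. The verb classes and
# adverb sets below are hypothetical examples, not the paper's lexicon.
ACTION, STATE = "-ko issta", "-e issta"

# Class 1: action only; class 3: state only; classes 2/4: default with exceptions.
VERB_CLASS = {"taberu": 1, "hashiru": 2, "sinu": 3, "suwaru": 4, "aru": 5}
ACTION_HINT_ADVERBS = {"ima", "zutto"}       # hint toward an ongoing action
STATE_HINT_ADVERBS = {"mou", "sudeni"}       # hint toward a resulting state

def translate_te_iru(verb: str, adverbs: set[str]) -> str:
    """Return the Korean progressive marker for a '-te iru' predicate."""
    cls = VERB_CLASS.get(verb, 5)
    if cls == 1:
        return ACTION
    if cls == 3:
        return STATE
    if cls == 2:   # default action progressive, overridden by state-hint adverbs
        return STATE if adverbs & STATE_HINT_ADVERBS else ACTION
    if cls == 4:   # default state progressive, overridden by action-hint adverbs
        return ACTION if adverbs & ACTION_HINT_ADVERBS else STATE
    return ACTION  # class 5: fall back to the action progressive

print(translate_te_iru("suwaru", {"ima"}))   # -> "-ko issta"
```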


The effects of neonatal ventilator care or maternal chorioamnionitis on the development of bronchopulmonary dysplasia (산모의 융모양막염 및 인공호흡기 치료가 미숙아 만성 폐질환의 발생에 미치는 영향)

  • Yun, Ki-Tae;Lee, Dong-Whan;Lee, Sang-Geel
    • Clinical and Experimental Pediatrics
    • /
    • v.52 no.8
    • /
    • pp.893-897
    • /
    • 2009
  • Purpose : Advances in neonatal intensive care have improved the survival rate of low-birth-weight infants, but mild bronchopulmonary dysplasia (BPD), with its accompanying need for prolonged oxygen supplementation, remains problematic. Maternal chorioamnionitis and neonatal ventilator care affect the development of BPD. This study aimed to examine whether maternal chorioamnionitis or neonatal ventilator care affects the development of BPD dependently or independently. Methods : We performed a retrospective study of 158 newborn infants below 36 weeks of gestational age and 1,500 g birth weight admitted to the neonatal intensive care unit of Daegu Fatima Hospital between January 2000 and December 2006. We analyzed the incidence of BPD according to maternal chorioamnionitis and neonatal ventilator care. Results : Histologic chorioamnionitis was observed in 50 of 158 infants (31.6%). There were no significant differences in the development of BPD (P=0.735) between the chorioamnionitis (+) and chorioamnionitis (-) groups. In the multiple regression analysis, ventilator care (OR=7.409, 95% CI=2.532-21.681) and neonatal sepsis (OR=4.897, 95% CI=1.227-19.539) affected the development of BPD rather than maternal chorioamnionitis (OR=0.461, 95% CI=0.201-1.059). Conclusion : Ventilator care or neonatal sepsis, rather than maternal chorioamnionitis, may play a role in the development of BPD.

Classification of Domestic Freight Data and Application for Network Models in the Era of 'Government 3.0' ('정부 3.0' 시대를 맞이한 국내 화물 자료의 집계 수준에 따른 분류체계 구축 및 네트워크 모형 적용방안)

  • YOO, Han Sol;KIM, Nam Seok
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.4
    • /
    • pp.379-392
    • /
    • 2015
  • Freight flow data in Korea have been collected for a variety of purposes by various organizations. However, since the representation and format of the data vary, they have not been substantially used for freight analyses, let alone for freight policies. In order to increase the applicability of those data sets, they need to be brought into a single table and compared to identify their differences. The raw data can then be aggregated by particular criteria such as mode, origin and destination, and commodity type. This study examines the freight data issue from three points of view. First, we investigated the various freight volume data sets released by several organizations. Second, we developed formulations for freight volume data. Third, we discussed how to apply the formulations to network models in which particular OR (Operations Research) techniques are used. The results emphasize that some data might be useless for modeling once they are aggregated. In examining the freight volume data, this study found that 14 organizations share their data sets at various aggregation levels. This study is not an ordinary research article that includes data analysis, because extensive case studies seem impossible given how diverse the data dealt with in this study are. Nevertheless, this study may guide the research direction of the freight transport research community in terms of data issues. In particular, the study is timely because the government has emphasized the importance of sharing data with the public for research purposes through 'Government 3.0'.
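As an illustration of the kind of OR-based network model that aggregated freight volumes could feed (this example is not from the paper), the following sketch solves a minimum-cost flow problem with networkx; the node names, capacities, and costs are hypothetical.

```python
# Illustration only: once freight volumes are aggregated into origin-destination
# demands, an OR-style network model such as minimum-cost flow can be applied.
# All node names, capacities, and per-unit costs below are made up.
import networkx as nx

G = nx.DiGraph()
# demand < 0 marks a supply (origin) node, demand > 0 a consumption (destination) node
G.add_node("Busan_port", demand=-100)       # 100 units of freight originate here
G.add_node("Seoul", demand=100)             # 100 units must arrive here
G.add_node("Daejeon", demand=0)             # pure transshipment node

G.add_edge("Busan_port", "Daejeon", capacity=80, weight=2)   # cost per unit = 2
G.add_edge("Busan_port", "Seoul", capacity=60, weight=5)
G.add_edge("Daejeon", "Seoul", capacity=80, weight=1)

flow_cost, flow_dict = nx.network_simplex(G)
print(flow_cost)    # total transport cost of the optimal assignment
print(flow_dict)    # per-link freight flows
```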

FOI and Government Records Management Reforms under Obama Administration (미국 정보자유제도와 정부기록관리 혁신 오바마 행정부의 정부개방정책을 중심으로)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies
    • /
    • no.35
    • /
    • pp.3-40
    • /
    • 2013
  • Establishment and expansion of a FOI regime is a fundamental basis for modern democracy. Informed decisions and supports by the people are critical to establishment of democratic institutions and policies. The best tool to make informed decisions and to ensure accountability is the FOI. For effective FOI, good records management is necessary requirement. This paper observes and analyses the development of the FOI in the U.S., the Open Government policy, and the government records management reforms under Obama Administration to search viable solutions for Korean FOI and public records management reforms. Major revisions and advancement of the FOIA in the United States are examined, especially the revision of the FOIA as the OPEN Government Act of 2007. The FOIA revision enhanced greatly the freedom of information in the U.S. including the establishment of an independent FOI ombudsman by the Congress. The paper also discusses the Presidential memoranda on the Open Government and the FOI by President Obama, the following directives, Presidential memorandum on government records management and the Government Records Management Directive. Major contents of the directives, plans, and achievement are summarized and analysed. Finally, this paper compares the government records management reforms under former President Roh Mu Hyun with the Obama's reform drive. The comparison found that major difference in the "top-down" government records reforms are the difference in democratic institutions such as weak congressional politics, strong bureaucratic obstacles, and relatively weak social and professional supports for the reforms in Korea, while these reforms were similar in terms that they were driven by insightful political leaders. Independent FOI ombudsman and national records administration are necessary for such democratic reforms.