• Title/Summary/Keyword: mobile system

Comparisons of Popularity- and Expert-Based News Recommendations: Similarities and Importance (인기도 기반의 온라인 추천 뉴스 기사와 전문 편집인 기반의 지면 뉴스 기사의 유사성과 중요도 비교)

  • Suh, Kil-Soo;Lee, Seongwon;Suh, Eung-Kyo;Kang, Hyebin;Lee, Seungwon;Lee, Un-Kon
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.191-210
    • /
    • 2014
  • As mobile devices that can be connected to the Internet have spread and networking has become possible anytime and anywhere, the Internet has become central in the dissemination and consumption of news. Accordingly, the ways news is gathered, disseminated, and consumed have changed greatly. In traditional news media such as magazines and newspapers, expert editors determined which events were worthy of deploying their staffs or freelancers to cover and which stories from newswires or other sources would be printed. Furthermore, they determined how these stories would be displayed in their publications in terms of page placement, space allocation, type sizes, photographs, and other graphic elements. In turn, readers (news consumers) judged the importance of news not only by its subject and content, but also through subsidiary information such as its location and how it was displayed. Their judgments reflected their acceptance of an assumption that these expert editors had the knowledge and ability not only to serve as gatekeepers in determining what news was valuable and important but also to rank its value and importance. News assembled, dispensed, and consumed in this manner can thus be called expert-based recommended news. However, in the era of Internet news, the role of expert editors as gatekeepers has been greatly diminished. Many Internet news sites offer a huge volume of news on diverse topics from many media companies, thereby eliminating in many cases the gatekeeper role of expert editors. One result has been to turn news users from passive recipients into active seekers who search for news that reflects their interests or tastes. To solve the problem of information overload and enhance the efficiency of news users' searches, Internet news sites have introduced numerous recommendation techniques. Recommendations based on popularity constitute one of the most frequently used of these techniques. This popularity-based approach shows a list of news items that have been read and shared by many people, based on user behavior such as clicks, evaluations, and sharing. The "most-viewed," "most-replied," and "real-time issue" lists found on news sites belong to this category. Given that collective intelligence serves as the premise of popularity-based recommendations, popularity-based news would be considered highly important, because stories that have been read and shared by many people are presumably more likely to be better than those preferred by only a few. However, these recommendations may reflect a popularity bias, because stories judged likely to be more popular have been placed where they will be most noticeable. As a result, such stories are more likely to be continuously exposed and included in popularity-based recommended news lists. Popular news stories are not necessarily those that are most important to readers. Given that many people use popularity-based recommended news and that the popularity-based recommendation approach greatly affects patterns of news use, reviewing whether popularity-based news recommendations actually reflect important news is indispensable. Therefore, in this study, the popularity-based news recommendations of an Internet news portal were compared with the top placements of news in printed newspapers, and news users' judgments of which stories were personally and socially important were analyzed. The study was conducted in two stages.
In the first stage, content analyses were used to compare the content of the popularity-based news recommendations of an Internet news site with that of the expert-based news recommendations of printed newspapers. Five days of news stories were collected. The "most-viewed" list of the Naver portal site was used as the popularity-based recommendation; the expert-based recommendations were represented by the top news stories from five major daily newspapers: the Chosun Ilbo, the JoongAng Ilbo, the Dong-A Daily News, the Hankyoreh Shinmun, and the Kyunghyang Shinmun. In the second stage, along with the news stories collected in the first stage, some Internet news stories and some printed-newspaper stories that the two channels did not have in common were randomly extracted and used in online questionnaire surveys that asked about the importance of the selected stories. According to our analysis, only 10.81% of the popularity-based news recommendations were similar in content to the expert-based news judgments. The content of popularity-based news recommendations therefore appears to be quite different from that of expert-based recommendations. The differences in importance between these two groups of news stories were analyzed, and the results indicated that whereas the two groups did not differ significantly in personal importance, the expert-based recommendations ranked higher in social importance. This study contributes to theory by examining popularity-based news recommendations from the two theoretical viewpoints of collective intelligence and popularity bias and by using both qualitative (content analysis) and quantitative (questionnaire) methods. It also sheds light on the differences between the roles of media channels that fulfill an agenda-setting function and Internet news sites that treat news from the viewpoint of markets.
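
The headline figure of the first stage, the 10.81% overlap, comes down to counting how many popularity-based stories also appear among the expert-selected front-page stories collected on the same days. A minimal sketch of that comparison follows; the story identifiers and the exact-match rule are illustrative assumptions, since the study matched stories by manual content coding rather than by ID.

```python
# Sketch of the first-stage comparison: the share of popularity-based stories whose
# content also appears among the expert-based (front-page) stories of the same day.
# Story IDs and exact-set matching are illustrative assumptions; the study used
# manual content analysis to decide whether two stories covered the same content.

def overlap_ratio(popular_by_day, expert_by_day):
    matched = sum(len(popular_by_day[day] & expert_by_day[day]) for day in popular_by_day)
    total = sum(len(stories) for stories in popular_by_day.values())
    return matched / total if total else 0.0

# Hypothetical story IDs for a two-day excerpt of the five-day collection.
popular = {"day1": {"s01", "s02", "s03"}, "day2": {"s04", "s05"}}
expert = {"day1": {"s02", "s10", "s11"}, "day2": {"s12", "s13", "s14"}}
print(f"overlap: {100 * overlap_ratio(popular, expert):.2f}%")  # cf. the reported 10.81%
```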

Carbon Monoxide Dispersion in an Urban Area Simulated by a CFD Model Coupled to the WRF-Chem Model (WRF-Chem 모델과 결합된 CFD 모델을 활용한 도시 지역의 일산화탄소 확산 연구)

  • Kwon, A-Rum;Park, Soo-Jin;Kang, Geon;Kim, Jae-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.679-692
    • /
    • 2020
  • We coupled a CFD model to the WRF-Chem model (WRF-CFD model) and investigated the characteristics of flows and carbon monoxide (CO) distributions in a building-congested district. We validated the simulated results against the measured wind speeds, wind directions, and CO concentrations. The WRF-Chem model simulated winds from southwesterly to southeasterly, overestimating the measured wind speeds. The statistical validation showed that the WRF-CFD model simulated the measured wind speeds more realistically than the WRF-Chem model. The WRF-Chem model significantly underestimated the measured CO concentrations, and the WRF-CFD model improved the CO concentration prediction. Based on the statistical validation results, the WRF-CFD model improved the performance in predicting CO concentrations by taking the complex distribution of buildings and the mobile sources of CO into account. At 04 KST on May 22, there was a downdraft around the AQMS, and airflow with a relatively low CO concentration was advected from the upper layer. As a result, the CO concentration was lower at the AQMS than in the surrounding area. At 15 KST on May 22, there was an updraft around the AQMS, which resulted in a slightly higher CO concentration than in the surroundings. The WRF-CFD model transported CO emitted from the mobile sources to the AQMS measurement altitude, reproducing the measured CO concentration well. At 18 KST on May 22, the WRF-CFD model simulated high CO concentrations because of high CO emissions, a broad updraft area, and an increase in turbulent diffusion caused by increased wind shear near the ground.
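
The statistical validation is a point-by-point comparison of simulated and measured wind speeds and CO concentrations at the AQMS. The abstract does not name the statistics used, so the sketch below assumes common choices for this kind of model evaluation (mean bias, RMSE, and index of agreement) and uses hypothetical hourly values.

```python
# Sketch of a statistical validation of model output against AQMS measurements.
# The metric set (bias, RMSE, index of agreement) and all numbers are assumptions.
import numpy as np

def validation_stats(simulated, measured):
    """Mean bias, RMSE, and index of agreement (IOA) between model and observation."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(measured, dtype=float)
    bias = float(np.mean(sim - obs))
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ioa = float(1.0 - np.sum((sim - obs) ** 2) / denom) if denom > 0 else np.nan
    return {"bias": bias, "rmse": rmse, "ioa": ioa}

# Hypothetical hourly CO concentrations (ppm) from WRF-Chem, WRF-CFD, and the AQMS.
measured = [0.62, 0.58, 0.71, 0.80, 0.66]
wrf_chem = [0.31, 0.29, 0.35, 0.40, 0.33]   # underestimates, as reported
wrf_cfd  = [0.58, 0.55, 0.69, 0.83, 0.64]   # closer to the measurements

print("WRF-Chem:", validation_stats(wrf_chem, measured))
print("WRF-CFD :", validation_stats(wrf_cfd, measured))
```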

Review on Usefulness of EPID (Electronic Portal Imaging Device) (EPID (Electronic Portal Imaging Device)의 유용성에 관한 고찰)

  • Lee, Choong Won;Park, Do Keun;Choi, A Hyun;Ahn, Jong Ho;Song, Ki Weon
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.57-67
    • /
    • 2013
  • Purpose: Replacing the film that used to be used for checking patient set-up and for dosimetry during radiation therapy, more and more devices equipped with an EPID are in use at present. Accordingly, this article evaluated the accuracy of position verification and the usefulness of dosimetry when using an electronic portal imaging device. Materials and Methods: Fifty publications were acquired by searching the Korean Society of Radiotherapeutic Technology, the Korean Society for Radiation Oncology, and PubMed using "EPID", "Portal dosimetry", "Portal image", "Dose verification", "Quality control", "Cine mode", "Quality assurance", and "In vivo dosimetry" as index terms, and the usefulness of EPID was analyzed by classifying them into the history of EPID and dosimetry, set-up verification, and characteristics of EPID. Results: EPID has developed from the first generation of liquid-filled ionization chambers, through the second generation of camera-based fluoroscopy, to the third generation of amorphous-silicon EPID. Imaging modes can be divided into EPID mode, cine mode, and integrated mode. When evaluating absolute dose accuracy, EPID showed errors within 1% and EDR2 film showed errors within 3%, confirming that EPID is more accurate in error measurement than film. When the dose distribution of the reference plane calculated by the treatment planning system was gamma-analyzed against the planes measured with EDR2 film and with EPID, both film and EPID showed less than 2% of pixels exceeding a gamma value of 1 (γ > 1) under the 3%/3 mm and 2%/2 mm criteria, respectively. Comparing workloads, the time needed for full-course QA in IMRT was approximately 110 minutes with EDR2 film and approximately 55 minutes with EPID. Conclusion: EPID can easily replace the conventional, complicated, and troublesome film and ionization chamber that used to be used for dosimetry and set-up verification, and it proved to be a very efficient and accurate dosimetry device for quality assurance of IMRT (intensity-modulated radiation therapy). Because cine-mode imaging using EPID allows tumors to be located in real time without additional dose in the lung and liver, which move with the diaphragm, and in rectal cancer patients with an unstable position, it may help to implement the optimal radiotherapy for each patient.
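
The gamma analysis mentioned in the results combines a dose-difference criterion (e.g. 3%) with a distance-to-agreement criterion (e.g. 3 mm) and reports the fraction of points with γ > 1. The sketch below is a brute-force 1D global-gamma calculation on hypothetical dose profiles, not the clinical software used in the study.

```python
# Sketch of a 1D global gamma index (dose difference + distance to agreement).
# Dose profiles, spacing, and criteria values are illustrative assumptions.
import numpy as np

def gamma_1d(reference, evaluated, spacing_mm, dose_crit=0.03, dist_mm=3.0):
    """For each reference point, minimise the combined dose/distance metric
    over all evaluated points; gamma <= 1 means the point passes."""
    ref = np.asarray(reference, dtype=float)
    ev = np.asarray(evaluated, dtype=float)
    positions = np.arange(ev.size) * spacing_mm
    dose_norm = dose_crit * ref.max()            # global normalisation
    gamma = np.empty(ref.size)
    for i, (pos, d_ref) in enumerate(zip(np.arange(ref.size) * spacing_mm, ref)):
        dose_term = (ev - d_ref) / dose_norm
        dist_term = (positions - pos) / dist_mm
        gamma[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gamma

# Hypothetical dose profiles (arbitrary units) sampled every 1 mm.
planned = np.exp(-((np.arange(50) - 25) / 10.0) ** 2)
measured = planned * 1.01 + 0.005                 # small systematic deviation
g = gamma_1d(planned, measured, spacing_mm=1.0)
print(f"pixels with gamma > 1: {100 * np.mean(g > 1):.2f}%")  # cf. the <2% reported
```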

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.109-130
    • /
    • 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storage or databases. Transactions on the Internet, on mobile devices, and in ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storage because of their size. Other data sets are lost as soon as they are created because, for many reasons, they are not saved. How to use such large data and how to use data streams efficiently are challenging questions in the study of data mining. Stream data is a data set that accumulates continuously in data storage from a data source. The size of this data set, in many cases, becomes increasingly large over time. Mining information from this massive data requires too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream, and this rule set is accumulated into a master rule set, which also serves as a model for real-time decision making. One of the main advantages of this method is that it requires much less storage space than the traditional method, which saves the whole data set. Another advantage is that the accumulated rule set is used as a prediction model. Prompt responses to user requests are possible at any time, because the rule set is always ready to be used for decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on the theory of ensemble approaches, the combination of many different models can produce a better-performing prediction model. The consolidated rule set actually covers the whole data set, whereas the traditional sampling approach covers only part of it. This study uses stock market data, which is a heterogeneous data set whose characteristics vary over time. Stock market indexes can fluctuate whenever an event influences the market; therefore, the variance of the values of each variable is large compared with that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is harder to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the method suggested here. The first approach induces a rule set only from the recent data to predict the new data; the second induces a rule set from all the data accumulated from the beginning every time a new data set has to be predicted. We found that neither of these two is as good in performance as the accumulated-rule-set method. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model only with the more important rules, and the second uses all the rules by assigning weights to them based on their performance. The second approach shows better performance than the first. The experiments also show that the method suggested in this study can be an efficient approach for mining information and patterns from stream data. A limitation of this study is that its application is bounded to stock market data; more dynamic real-time stream data sets are desirable for applying this method. Another open issue is that, as the number of rules increases over time, special rules such as redundant or conflicting rules have to be managed efficiently.
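
The core idea, mining a rule set from each chunk of the stream, accumulating it into a master rule set, and using the weighted rules as a real-time prediction model, can be sketched as below. The single-threshold rule format and the accuracy-based weighting are simplifying assumptions, not the paper's exact rule induction or weighting scheme.

```python
# Sketch of the accumulated-rule-set idea: rules mined from each arriving chunk are
# merged into a master rule set and weighted by observed accuracy; prediction is a
# weighted vote. The rule format and weighting scheme are simplifying assumptions.
from dataclasses import dataclass, field

@dataclass
class Rule:
    feature: str
    threshold: float
    prediction: int          # e.g. 1 = index up, 0 = index down
    correct: int = 0
    tried: int = 0

    def matches(self, row):
        return row[self.feature] >= self.threshold

    @property
    def weight(self):
        return self.correct / self.tried if self.tried else 0.5

@dataclass
class MasterRuleSet:
    rules: list = field(default_factory=list)

    def accumulate(self, new_rules):
        """Add rules mined from the latest chunk of the stream."""
        self.rules.extend(new_rules)

    def update_performance(self, row, label):
        for r in self.rules:
            if r.matches(row):
                r.tried += 1
                r.correct += int(r.prediction == label)

    def predict(self, row):
        votes = {0: 0.0, 1: 0.0}
        for r in self.rules:
            if r.matches(row):
                votes[r.prediction] += r.weight
        return max(votes, key=votes.get)

# Hypothetical usage on a toy stream of daily index features.
model = MasterRuleSet()
model.accumulate([Rule("momentum", 0.5, 1), Rule("volatility", 0.8, 0)])
day = {"momentum": 0.7, "volatility": 0.3}
print(model.predict(day))          # weighted vote over matching rules
model.update_performance(day, 1)   # feed back the realised label
```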

An Embedding /Extracting Method of Audio Watermark Information for High Quality Stereo Music (고품질 스테레오 음악을 위한 오디오 워터마크 정보 삽입/추출 기술)

  • Bae, Kyungyul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.21-35
    • /
    • 2018
  • Since the introduction of MP3 players, CD recordings have gradually been disappearing, and the music-consuming environment is shifting to mobile devices. The introduction of smart devices has increased the utilization of music through the playback, mass storage, and search functions that are integrated into smartphones and tablets. When MP3 players were first supplied, the bitrate of compressed music content was generally 128 Kbps. However, as the demand for high-quality music increased, 384 Kbps content appeared. Recently, music content in the FLAC (Free Lossless Audio Codec) format, which uses lossless compression, is becoming popular. The download services of many music sites in Korea are classified into unlimited downloads with technical protection and limited downloads without technical protection. Digital Rights Management (DRM) technology is used as the technical protection measure for unlimited downloads, but the music can only be used on authenticated devices that have DRM installed. Even if the music was purchased by the user, it cannot be used on other devices. By contrast, in the case of music that is limited in quantity but not technically protected, there is no way to take action against anyone who redistributes it, and for high-quality music such as FLAC the loss is greater. In this paper, the author proposes an audio watermarking technology for copyright protection of high-quality stereo music. Two kinds of information, "Copyright" and "Copy_free", are generated using a turbo code. The two watermarks are each composed of 9 bytes (72 bits). When the turbo code is applied for error correction, the amount of information to be inserted increases to 222 bits. The 222-bit watermark was expanded to 1024 bits to be robust against additional errors and was finally used as the watermark inserted into the stereo music. The turbo code can recover the raw data even when part of the code is damaged by an attack on the watermarked content, as long as less than 15% of it is damaged. Expanding the code to 1024 bits increases the probability of recovering the 222 bits from damaged content, which makes the watermark itself more resistant to attack. The proposed algorithm uses quantization in the DCT domain so that the watermark can be detected efficiently and the SNR can be improved when the stereo music is converted into mono. As a result, the average SNR exceeded 40 dB, an improvement of more than 10 dB over traditional quantization methods. This is a very significant result because it corresponds to roughly a tenfold relative improvement in sound quality. In addition, a sample shorter than one second is sufficient for extracting the watermark, and the watermark can be completely extracted from music samples of less than one second even under MP3 compression at a bitrate of 128 Kbps. By contrast, the conventional quantization method largely fails to extract the watermark even from 10-second samples, about ten times the length required by the proposed method. In this study, the watermark embedded into the music is 72 bits long, which provides sufficient capacity for the information needed and enough bits to identify music distributed all over the world: $2^{72}$ is about $4.7\times10^{21}$, so the watermark can serve as an identifier and be used for copyright protection of high-quality music services. The proposed algorithm can be used not only for high-quality audio but also for the development of watermarking algorithms for multimedia such as UHD (Ultra High Definition) TV and high-resolution images. In addition, with the development of digital devices, users in the music industry are demanding high-quality music, and artificial intelligence assistants are arriving along with high-quality music and streaming services. The results of this study can be used to protect the rights of copyright holders in these industries.
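
The general mechanism, embedding bits by quantizing selected DCT coefficients and reading them back from the quantization lattice, can be sketched as below. The block size, the coefficient index, the quantization step, and the plain even/odd lattice are illustrative assumptions; the paper's turbo coding, 1024-bit expansion, and stereo-to-mono handling are not reproduced here.

```python
# Sketch of embedding watermark bits by quantizing DCT coefficients
# (quantization index modulation). Block size, coefficient index, and step size
# are illustrative assumptions, not the paper's exact parameters.
import numpy as np
from scipy.fft import dct, idct

BLOCK, COEF, STEP = 1024, 10, 0.05   # samples per block, embedded coefficient, step size

def embed(signal, bits):
    out = signal.copy()
    for i, bit in enumerate(bits):
        block = out[i * BLOCK:(i + 1) * BLOCK]
        c = dct(block, norm="ortho")
        # Snap the chosen coefficient onto the even or odd quantization lattice.
        q = np.round(c[COEF] / STEP)
        if int(q) % 2 != bit:
            q += 1
        c[COEF] = q * STEP
        out[i * BLOCK:(i + 1) * BLOCK] = idct(c, norm="ortho")
    return out

def extract(signal, n_bits):
    bits = []
    for i in range(n_bits):
        c = dct(signal[i * BLOCK:(i + 1) * BLOCK], norm="ortho")
        bits.append(int(np.round(c[COEF] / STEP)) % 2)
    return bits

rng = np.random.default_rng(0)
audio = rng.normal(0, 0.1, BLOCK * 72)        # stand-in for one channel of music
payload = list(rng.integers(0, 2, 72))        # 72-bit watermark, as in the paper
marked = embed(audio, payload)
print(extract(marked, 72) == payload)         # True when no attack is applied
```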

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.126-134
    • /
    • 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most of the detection methods are complicated to implement in a real-time portable electrocardiograph device and have the disadvantage of requiring a large amount of calculation. R-peak detection requires pre-processing and post-processing related to baseline drift and the removal of noise from the commercial power supply in the ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak value cannot be detected when the input is lower than a threshold value. Moreover, there is a problem in detecting the P-peak and T-peak values because noise leads to the derivation of an erroneous threshold value. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in ECG signals using an adaptive filter to solve the problems involved in threshold extraction. We also propose a technique to extract the appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal. To detect the R-peak from the ECG signal, we propose a threshold neighborhood search technique. Through experiments, we confirmed the improved R-peak detection accuracy of the proposed method and achieved a detection speed suitable for a mobile system by reducing the amount of calculation. The experimental results show that the heart rate detection accuracy and sensitivity were very high (about 100%).
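
The pipeline described above, baseline removal, a variable threshold derived from the minimum and maximum of the filtered signal, and a neighborhood search around threshold crossings, can be sketched as below. The moving-average detrend stands in for the paper's adaptive filter, and the window sizes and threshold fraction are assumptions.

```python
# Sketch of the variable-threshold R-peak idea: remove baseline drift, derive a
# threshold from the filtered signal's min/max, then search the neighborhood of
# each threshold crossing for the local maximum. The moving-average detrend
# stands in for the paper's adaptive filter; all constants are assumptions.
import numpy as np

def detect_r_peaks(ecg, fs, win_s=0.6, frac=0.6, search_s=0.1):
    ecg = np.asarray(ecg, dtype=float)
    win = max(1, int(win_s * fs))
    baseline = np.convolve(ecg, np.ones(win) / win, mode="same")
    filtered = ecg - baseline                                  # baseline-drift removal
    threshold = filtered.min() + frac * (filtered.max() - filtered.min())
    half = int(search_s * fs)
    peaks, i = [], 0
    while i < len(filtered):
        if filtered[i] > threshold:
            lo, hi = max(0, i - half), min(len(filtered), i + half)
            peaks.append(lo + int(np.argmax(filtered[lo:hi])))  # neighborhood search
            i = hi                                              # skip past this beat
        else:
            i += 1
    return peaks

# Hypothetical ECG: 1 Hz spikes on slow baseline wander, sampled at 250 Hz.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.3 * np.sin(2 * np.pi * 0.2 * t)                 # baseline wander
ecg[(np.arange(t.size) % fs) == 0] += 1.5               # R-peaks once per second
print(detect_r_peaks(ecg, fs))                          # ~10 peak indices
```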

Nuclear Terrorism and Global Initiative to Combat Nuclear Terrorism(GICNT): Threats, Responses and Implications for Korea (핵테러리즘과 세계핵테러방지구상(GICNT): 위협, 대응 및 한국에 대한 함의)

  • Yoon, Tae-Young
    • Korean Security Journal
    • /
    • no.26
    • /
    • pp.29-58
    • /
    • 2011
  • Since 11 September 2001, warnings about the nexus of terrorism and nuclear weapons and materials, which poses one of the gravest threats to the international community, have continued. The purpose of this study is to analyze the aims, principles, characteristics, activities, impediments to progress, and developmental recommendations of the Global Initiative to Combat Nuclear Terrorism (GICNT). In addition, it suggests implications of the GICNT for ROK policy. The international community will need a comprehensive strategy with four key elements to accomplish the goals of the GICNT: (1) securing and reducing nuclear stockpiles around the world, (2) countering terrorist nuclear plots, (3) preventing and deterring state transfers of nuclear weapons or materials to terrorists, and (4) interdicting nuclear smuggling. Moreover, other steps should be taken to build the needed sense of urgency, including: (1) analysis and assessment through joint threat briefings on realistic nuclear threat possibilities, (2) nuclear terrorism exercises, (3) fast-paced nuclear security reviews, (4) realistic testing of nuclear security performance against insider or outsider threats, and (5) preparing a shared database of threats and incidents. As for the ROK, the main concerns are the transfer of North Korea's nuclear weapons, materials, and technology to international terror groups, attacks on nuclear facilities, and the use of nuclear devices. As the world's fifth-largest nuclear power generation country, the ROK has strengthened its systems of physical protection and nuclear counterterrorism based on the international conventions. For comprehensive and effective prevention of nuclear terrorism, the ROK has to strengthen nuclear detection instruments and mobile radiation monitoring systems at airports, ports, road networks, and national critical infrastructure. Furthermore, it has to draw up an effective crisis management manual and prepare nuclear counterterrorism exercises and operational postures. The fundamental key to the prevention of, detection of, and response to nuclear terrorism, which would have catastrophic impacts, is not only to establish domestic laws, institutions, and systems, but also to strengthen international cooperation.

Handover Functional Architecture for Next Generation Wireless Networks (차세대 무선 네트워크를 위한 핸드오버 기능 구조 제안)

  • Baek, Joo-Young;Kim, Dong-Wook;Kim, Hyun-Jin;Choi, Yoon-Hee;Kim, Duk-Jin;Kim, Woo-Jae;Suh, Young-Joo;Kang, Suk-Yang;Kim, Kyung-Suk;Shin, Kyung-Chul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.268-273
    • /
    • 2006
  • Next-generation wireless networks (4G) are an area that requires extensive research along with the development of new radio access technologies. Among these topics, handover technology for providing seamless mobility to terminals is arguably the most important. Next-generation wireless networks are expected to be used together with existing networks such as WLANs and cellular networks in addition to new radio access technologies, and Mobile IPv6 is expected to be used for mobility support at the network layer. To provide seamless mobility in such networks, research on a comprehensive handover function that considers the more diverse network environments and QoS is needed, together with the handover functions and architectures studied so far. In this paper, we identify the functions required to provide seamless handover for terminals in next-generation wireless networks, define the organic relationships among them, and propose a comprehensive handover functional architecture that takes into account diverse network environments, user priorities, and the QoS requirements of applications. The proposed handover architecture is divided into three modules, Monitoring, Triggering, and Handover, each of which is further subdivided into sub-modules as needed. The most distinctive feature of the proposed architecture is that it comprehensively considers the various factors that can trigger a handover and performs a multi-stage comparison among them, rather than a flat comparison, enabling more accurate triggering. In addition, functions for guaranteeing the terminal's QoS requirements and for handling network congestion and load balancing are added to the handover function so that network resources can be used efficiently.
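
A minimal sketch of the three-module structure (Monitoring, Triggering, Handover) follows. The candidate-network attributes, the two-stage filter-then-rank comparison, and the load-based tie-breaking are illustrative assumptions rather than the paper's exact sub-module design.

```python
# Sketch of the three-module structure described above (Monitoring, Triggering,
# Handover). Candidate attributes, the two-stage comparison, and the
# load-balancing rule are illustrative assumptions, not the paper's exact design.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    signal_dbm: float
    bandwidth_mbps: float
    load: float            # 0.0 (idle) .. 1.0 (congested)

class Monitoring:
    """Collects link and network measurements for the serving and candidate networks."""
    def scan(self):
        return [Candidate("WLAN-1", -60, 54.0, 0.85),
                Candidate("4G-A",   -75, 20.0, 0.30)]

class Triggering:
    """Multi-stage decision: first filter on hard constraints, then rank the rest."""
    def __init__(self, min_signal_dbm, min_bandwidth_mbps):
        self.min_signal = min_signal_dbm
        self.min_bw = min_bandwidth_mbps

    def select(self, candidates):
        feasible = [c for c in candidates
                    if c.signal_dbm >= self.min_signal and c.bandwidth_mbps >= self.min_bw]
        # Second stage: prefer lightly loaded networks (simple load balancing).
        return min(feasible, key=lambda c: c.load) if feasible else None

class Handover:
    def execute(self, target):
        print(f"handover to {target.name}" if target else "stay on serving network")

# QoS requirement of the application: at least -80 dBm signal and 10 Mbps.
target = Triggering(min_signal_dbm=-80, min_bandwidth_mbps=10).select(Monitoring().scan())
Handover().execute(target)
```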

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has developed further as the use of the Internet and a variety of smart mobile devices has become more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, competition among online retailers is increasingly fierce, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency, because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behavior. In other words, only search keywords whose search results page leads to shopping-related pages are extracted from the entire set of search keywords. A comparison is then made between the rankings of the extracted keywords and the rankings of the entire set of search keywords. Two types of data are used in the experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental dataset came from a web site ranking service and the largest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected, and the search keywords used on those sites are extracted; search keywords can be easily extracted by simple parsing. The extracted keywords are ranked according to their frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site, from which a total of 344,822 search keywords were extracted. Next, by using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all the search keywords with those of the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and then chose the top 1,000 keywords as the set of true shopping keywords. We measured the precision, recall, and F-score of the entire keyword set and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were higher than those of the entire keyword set. This study proposes a scheme that is able to obtain shopping-related keywords in a relatively simple manner.
We could easily extract shopping-related keywords simply by examining transactions whose next visit is a shopping mall. The resultant shopping-related keyword set is expected to be a useful asset for many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can be easily applied to the construction of special area-related keywords as well as shopping-related ones.
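
The extraction rule and the evaluation metrics described above reduce to a short procedure: keep a search keyword if the next page visited in the browsing history is a shopping-mall site, then score the result against a reference keyword set with precision, recall, and F-score. The sketch below assumes a simplified log format and hypothetical domains.

```python
# Sketch of the extraction rule: keep only search keywords whose next page visit
# in the browsing history is a shopping-mall domain, then score the result against
# a reference keyword set. Log fields and domains are illustrative assumptions.
SHOPPING_DOMAINS = {"shopmall.example", "market.example"}

def shopping_keywords(browsing_log):
    """browsing_log: ordered list of (url_domain, search_keyword_or_None)."""
    keywords = set()
    for (domain, keyword), (next_domain, _) in zip(browsing_log, browsing_log[1:]):
        if keyword and next_domain in SHOPPING_DOMAINS:
            keywords.add(keyword)
    return keywords

def precision_recall_f1(extracted, truth):
    tp = len(extracted & truth)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

log = [("portal.example", "running shoes"), ("shopmall.example", None),
       ("portal.example", "weather today"),  ("news.example", None)]
found = shopping_keywords(log)                       # {"running shoes"}
print(precision_recall_f1(found, {"running shoes", "laptop"}))
```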

Inhibitory Effects of Ethanolic Extracts from Aster glehni on Xanthine Oxidase and Content Determination of Bioactive Components Using HPLC-UV (섬쑥부쟁이 에탄올 추출물의 잔틴산화효소 저해 효능 및 HPLC-UV를 이용한 유효성분의 함량 분석)

  • Kang, Dong Hyeon;Han, Eun Hye;Jin, Changbae;Kim, Hyoung Ja
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.45 no.11
    • /
    • pp.1610-1616
    • /
    • 2016
  • This study aimed to establish an optimal extraction process and high performance liquid chromatography-ultraviolet (HPLC-UV) analytical method for determination of 3,5-dicaffeoylquinic acid (3,5-DCQA) as a part of materials standardization for the development of a xanthine oxidase inhibitor as a health functional food. The quantitative determination method of 3,5-DCQA as a marker compound was optimized by HPLC analysis using a Luna RP-18 column, and the correlation coefficient for the calibration curve showed good linearity of more than 0.9999 using a gradient eluent of water (1% acetic acid) and methanol as the mobile phase at a flow rate of 1.0 mL/min and a detection wavelength of 320 nm. The HPLC-UV method was applied successfully to quantification of the marker compound (3,5-DCQA) in Aster glehni extracts after validation of the method with linearity, accuracy, and precision. Ethanolic extracts of A. glehni (AGEs) were evaluated by reflux extraction at 70 and 80°C with 30, 50, 70, and 80% ethanol for 3, 4, 5, and 6 h, respectively. Among AGEs, 70% AGE at 70°C showed the highest content of 3,5-DCQA of 52.59±3.45 mg/100 g A. glehni. Furthermore, AGEs were analyzed for their inhibitory activities on uric acid production by the xanthine/xanthine oxidase system. The 70% AGE at 70°C showed the most potent inhibitory activity with IC50 values of 77.01±3.13~89.96±3.08 μg/mL. The results suggest that standardization of 3,5-DCQA in AGEs using HPLC-UV analysis would be an acceptable method for the development of health functional foods.
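
The two quantitative steps described here, a linear calibration curve for 3,5-DCQA whose correlation coefficient confirms linearity, and IC50 values from xanthine oxidase inhibition, can be illustrated with the short sketch below. All concentrations, peak areas, and inhibition percentages are hypothetical, not the paper's measurements.

```python
# Sketch of the two quantitative steps: a linear HPLC-UV calibration curve for
# 3,5-DCQA (checking the correlation coefficient) and an IC50 estimate from
# xanthine oxidase inhibition data. All numbers are hypothetical.
import numpy as np

# Calibration: HPLC-UV peak area vs. 3,5-DCQA standard concentration (ug/mL).
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([152.0, 301.0, 748.0, 1495.0, 2989.0])
slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.2f}*conc + {intercept:.2f}, r = {r:.5f}")   # linearity check

sample_area = 820.0
print(f"sample concentration: {(sample_area - intercept) / slope:.2f} ug/mL")

# IC50: linear interpolation between the two doses bracketing 50% inhibition.
dose = np.array([10.0, 30.0, 60.0, 90.0, 120.0])        # extract, ug/mL
inhibition = np.array([12.0, 28.0, 45.0, 58.0, 71.0])   # % inhibition of uric acid production
ic50 = float(np.interp(50.0, inhibition, dose))
print(f"IC50 ~ {ic50:.1f} ug/mL")
```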