• Title/Summary/Keyword: Google Search (구글 검색)

Search Results: 178

An Efficient Extended Query Suggestion System Using the Analysis of Users' Query Patterns (사용자 질의패턴 분석을 이용한 효율적인 확장검색어 추천시스템)

  • Kim, Young-An;Park, Gun-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.7C / pp.619-626 / 2012
  • By suggesting additional extended or related queries, search engines aim to provide their users with more convenience. However, suggestion services based on popularity, i.e., on how many people have searched the web with a given query, are limited in how much they can raise user satisfaction, because each user's preferences and interests differ. This paper presents the design and implementation of a system that suggests extended queries suited to each user's demands, together with an improvement in the computation performed between entering the initial search word and extending it to related topics. In the evaluation, the proposed system suggested 41% more extended or related queries than Google and 48% more than Yahoo. By addressing the shortcomings of suggestion services based on general popularity rather than individual preference, the new system further enhanced users' convenience.
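
The abstract does not spell out the ranking algorithm, so the sketch below is only one plausible reading: score candidate extended queries by how often they follow the current query in the user's own history, blended with global popularity as a fallback. The `query_log` structure, the weight `alpha`, and the function names are illustrative assumptions, not the paper's method.

```python
from collections import Counter, defaultdict

def build_cooccurrence(query_log):
    """query_log: list of (user_id, session_queries) pairs, where
    session_queries is the ordered list of queries in one session."""
    per_user = defaultdict(Counter)   # user -> Counter of (query, next_query)
    global_counts = Counter()         # fallback popularity over all users
    for user_id, session in query_log:
        for q, q_next in zip(session, session[1:]):
            per_user[user_id][(q, q_next)] += 1
            global_counts[(q, q_next)] += 1
    return per_user, global_counts

def suggest_extended(user_id, query, per_user, global_counts, k=5, alpha=0.7):
    """Blend the user's own co-occurrence counts with global popularity."""
    scores = Counter()
    for (q, q_next), c in per_user.get(user_id, Counter()).items():
        if q == query:
            scores[q_next] += alpha * c
    for (q, q_next), c in global_counts.items():
        if q == query:
            scores[q_next] += (1 - alpha) * c
    return [q_next for q_next, _ in scores.most_common(k)]

# Hypothetical usage
log = [("u1", ["google search", "google search api", "google trends"]),
       ("u2", ["google search", "google scholar"])]
per_user, global_counts = build_cooccurrence(log)
print(suggest_extended("u1", "google search", per_user, global_counts))
```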

An Empirical Study on the Marketing Performance of e-Trade using Search Engine Optimization (검색엔진 최적화(SEO) 기법을 활용한 전자무역 마케팅 성과에 관한 실증연구)

  • Lee, Sang-Jin;Chung, Jason
    • International Commerce and Information Review / v.13 no.1 / pp.3-28 / 2011
  • Recently, the marketing methods of small and medium-sized exporting firms have shifted from internet marketing based on homepages or e-catalogs to search engine marketing. However, there is little concrete evidence of the effectiveness of search engine marketing. The purpose of this research is therefore to examine the marketing performance of search engine marketing (SEM) based on search engine optimization. To build an optimal SEM strategy, quantitative data such as homepage visits, page views, and traffic sources were collected from Google Analytics over three years. In parallel, a survey was carried out to measure qualitative effectiveness. The quantitative results suggest that the existing carryover and lag effects are maintained through search engine optimization, while the qualitative survey shows that satisfaction with and awareness of the homepage improved after optimization, which logically supports the increase in homepage visits found in the quantitative analysis. The exporting companies were also well aware that traffic and page views had increased after search engine optimization.


A Qualitative Study of Physicians' Use of Clinical Information Resources and Barriers (임상의사의 진료목적 정보원 이용과 장애요인에 관한 질적 연구)

  • Kim, Soon;Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.50 no.4 / pp.55-75 / 2016
  • We analyzed the characteristics of physicians' preferred information sources and the barriers they face through in-depth interviews. Information searches for patient treatment were subdivided into deciding on treatment methods, keeping up with the latest treatment trends, and preparing presentation materials for conferences. The variables affecting search behavior were identified as background knowledge of the topic, clinical experience, job title, search skills, user training, and familiarity with the library homepage. PubMed was the most preferred source because of its familiarity, reliability, and breadth of information; Google was also used frequently for its easy access and fast results. Accuracy and recency of information were the most significant selection criteria, and an easy interface and convenient access were also considered important given physicians' time constraints. Search obstacles included difficulty with the search system, unfamiliar terminology, overly vast resources, difficulty obtaining full-text articles, and complex advanced search features. The results of this study can be used as a basis for improving library information services and developing curricula for physicians.

The Need for Developing an Automatic VTS Logging Program Using Speech Recognition Technology (음성인식기술을 활용한 VTS 자동 기록 프로그램 개발의 필요성)

  • Park, Min-Gyeong;Kim, Myeong-Su;Lee, Sang-Rok;Heo, Yeong-Gwan
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2015.07a / pp.314-315 / 2015
  • In line with the recent remarkable progress of speech recognition technology and its wide adoption across many fields, we sought to apply it to VTS (Vessel Traffic Service), where most traffic control is conducted by voice. We aim to develop an automatic VTS logging program that records control communications more objectively and accurately, so that the records can be used not only for ship accidents but also in other areas such as vessel-related irregularities and requests for information disclosure.


Multi-purpose smart mirror including CCTV function (CCTV 기능을 포함한 다용도 스마트 미러)

  • Lee, Tea-Nam
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.863-865 / 2022
  • This project displays basic everyday information such as the time, weather, fine dust concentration, calendar, and news on a smart mirror, and additionally uses Google Assistant to provide voice-controlled features such as YouTube playback and internet search. A human-presence sensor keeps the mirror in power-saving mode while no motion is detected and switches it to normal mode when motion is detected. Finally, the mirror includes a CCTV function: the CCTV feed is streamed in real time through a web application, and the screen is recorded whenever a human face is detected.
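
The CCTV behavior described above (record only while a human face is in view) can be illustrated with a short face-triggered recording loop. The sketch below is an assumption about one possible implementation using OpenCV's bundled Haar cascade; the camera index, codec, frame rate, and output filename are placeholders, and the real-time web streaming part is omitted.

```python
import cv2

# Haar cascade face detector shipped with OpenCV (an assumption about the stack;
# the paper only says the screen is recorded when a human face is detected).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # assumed camera index
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        if writer is None:                     # start recording on first face
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter("capture.mp4", fourcc, 20.0, (w, h))
        writer.write(frame)
    elif writer is not None:                   # stop recording once faces leave
        writer.release()
        writer = None

cap.release()
if writer is not None:
    writer.release()
```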

Estimating Coverage of the Web Search Services Using Near-Uniform Sampling of Web Documents (균등한 웹 문서 샘플링을 이용한 웹 검색 서비스들의 커버리지 측정)

  • Jang, Sung-Soo;Kim, Kwang-Hyun;Lee, Joon-Ho
    • The KIPS Transactions: Part D / v.15D no.3 / pp.305-312 / 2008
  • Web documents containing useful information are widely available on the internet and are accessed through web search services. For this reason, web search services study better ways to collect more web documents, but they have difficulty determining the coverage of these collections. This paper evaluates existing coverage assessment methods and proposes a more effective technique: sampling web documents near-uniformly and checking whether each sampled document is indexed by a given search service, in order to assess both the absolute and relative coverage of web search engines. The paper also compares Korean web search services using the proposed method; both absolute and relative coverage were highest for Google, followed by Naver and Empas. The result is expected to help in estimating the coverage of web search services.
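
The counting step at the heart of the described technique — sample documents near-uniformly, then check how many of them a given engine has indexed — can be written down directly. In the hedged sketch below, `is_indexed` is a hypothetical callback standing in for however one queries an engine for a specific document, which the abstract does not specify.

```python
def estimate_coverage(sampled_urls, is_indexed):
    """Fraction of a near-uniform URL sample that a search engine has indexed.

    sampled_urls: iterable of URLs drawn (near-)uniformly from the web.
    is_indexed:   callable(url) -> bool; hypothetical check that asks the
                  engine whether it returns this exact document.
    """
    urls = list(sampled_urls)
    hits = sum(1 for url in urls if is_indexed(url))
    return hits / len(urls) if urls else 0.0

def relative_coverage(sampled_urls, is_indexed_a, is_indexed_b):
    """Coverage of engine A relative to engine B over the same sample."""
    cov_a = estimate_coverage(sampled_urls, is_indexed_a)
    cov_b = estimate_coverage(sampled_urls, is_indexed_b)
    return cov_a / cov_b if cov_b else float("inf")
```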

Predicting the Number of Confirmed COVID-19 Cases Using Deep Learning Models with Search Term Frequency Data (검색어 빈도 데이터를 반영한 코로나 19 확진자수 예측 딥러닝 모델)

  • Sungwook Jung
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.387-398 / 2023
  • The COVID-19 outbreak has significantly impacted human lifestyles and behavior patterns. Because COVID-19 spreads through the air as well as through droplets or aerosols, people were advised to avoid face-to-face contact and crowded indoor places as much as possible. It can therefore be expected that a person who has been in contact with a COVID-19 patient, or who visited a place where a case occurred, will search for COVID-19 symptoms on Google if they are concerned about infection. In this study, an exploratory analysis using deep learning models (DNN and LSTM) was conducted to see whether the number of confirmed COVID-19 cases could be predicted by combining Google Trends, which previously played a major role in influenza surveillance and management, with data on confirmed COVID-19 case counts. Notably, the search term frequency data used in this study are publicly available and do not invade privacy. With the deep neural network model, Seoul (9.6 million), the most populous city in South Korea, and Busan (3.4 million), the second most populous, recorded lower error rates when the forecasts included search term frequency data. These results suggest that search term frequency data plays an important role in cities with populations above a certain size. We also hope that these predictions can serve as evidence for policy decisions, such as relaxing restrictions or implementing stronger preventive measures.
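
As a rough illustration of the LSTM variant described above, the sketch below windows daily case counts together with a search term frequency series and trains a small recurrent regressor with Keras. The window length, layer sizes, and synthetic data are assumptions for illustration, not the paper's exact architecture or dataset.

```python
import numpy as np
from tensorflow import keras

def make_windows(cases, search_freq, window=14):
    """Stack daily case counts and search term frequency into (window, 2)
    input sequences; the target is the next day's case count."""
    features = np.stack([cases, search_freq], axis=-1)        # (days, 2)
    X = np.array([features[i:i + window] for i in range(len(cases) - window)])
    y = cases[window:]
    return X, y

def build_lstm(window=14, n_features=2):
    # Small LSTM regressor; sizes are illustrative assumptions.
    model = keras.Sequential([
        keras.Input(shape=(window, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1),                                 # next-day cases
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Hypothetical usage with synthetic series standing in for a city's data.
days = 200
cases = np.abs(np.random.randn(days)).cumsum()
search_freq = cases * 0.5 + np.random.randn(days)              # correlated signal
X, y = make_windows(cases, search_freq)
model = build_lstm()
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```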

A Study on Removal Request of Exposed Personal Information (노출된 개인정보의 삭제 요청에 관한 연구)

  • Jung, Bo-Reum;Jang, Byeong-Wook;Kim, In-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.6 / pp.37-42 / 2015
  • Although online search engine services provide a convenient means of finding information on the World Wide Web, they also pose a risk of privacy disclosure. Despite this risk, most users are neither aware that their personal information is exposed in search results nor aware of how to redress the issue by requesting its removal. According to the 2015 parliamentary inspection of government offices, many government agencies were criticized for mishandling personal information and for its leakage on online search engines such as Google. Given that personal information leakage via online search engines has drawn attention at the government level, the issue of search engines and privacy needs to be addressed. By examining current online search engines, this paper studies the degree of personal information exposure in search results and its underlying causes. Finally, based on the research results, the paper proposes policies and directions for the removal of exposed personal information, addressed to search engine service providers and users respectively.

An Anti-Phishing Approach based on Search Engine (검색 엔진 기반의 안티 피싱 기법)

  • Lee, Min-Soo;Lee, Hyeong-Gyu;Yoon, Hyun-Soo
    • Proceedings of the Korean Information Science Society Conference / 2010.06d / pp.121-124 / 2010
  • Phishing is a type of fraud carried out over the internet. To prevent phishing, web browser vendors provide blacklist-based phishing detection, and machine learning-based detection techniques have also been proposed to counter phishing attacks. However, as phishing attacks evolve, cases arise that existing techniques fail to detect: list-based solutions cannot detect a phishing page until some time has passed after it is created, and existing studies fail to detect image-based phishing attacks. This study therefore proposes an approach that uses the Google search engine to address these problems.
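
The abstract states only that the Google search engine is used to cover the gaps left by blacklists and image-based attacks, not how. One common search-based heuristic, shown here purely as an assumption rather than the paper's method, is to search for the page's prominent brand terms (for image-based pages, terms extracted by OCR) and flag the page when its own domain does not rank among the top results; `search_top_domains` is a hypothetical wrapper around whichever search API is used.

```python
from urllib.parse import urlparse

def is_suspicious(page_url, page_keywords, search_top_domains, top_k=10):
    """Search-engine-based phishing heuristic (illustrative assumption).

    page_keywords:      brand/title terms extracted from the page text or,
                        for image-based pages, from OCR of the images.
    search_top_domains: callable(query, top_k) -> list of result domains;
                        hypothetical wrapper over a search API.
    """
    page_domain = urlparse(page_url).netloc.lower().removeprefix("www.")
    query = " ".join(page_keywords)
    result_domains = [d.lower().removeprefix("www.")
                      for d in search_top_domains(query, top_k)]
    # A legitimate site usually ranks for its own brand terms; a freshly
    # registered phishing domain usually does not.
    return page_domain not in result_domains
```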


An Image URI and Metadata Collection Web Crawler for Dataset Creation (데이터셋 생성을 위한 이미지 URI 및 메타데이터 수집 크롤러)

  • Park, June-Hong;Kim, Seok-Jin;Jung, Yeon-Uk;Lee, Dong-Uk;Jeong, YoungJu;Seo, Dong-Mahn
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1082-1084 / 2019
  • As interest in training artificial intelligence grows, large amounts of data are needed to build the datasets required for training. We propose a keyword-based web crawler for collecting such data effectively. The crawler is designed around the Google Search API and continuously collects image URIs and metadata based on keywords entered by the user. The collected URIs and metadata are managed in a database. In future work, we plan to support other search APIs and to speed up crawling using multithreading.
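
As a rough sketch of the described pipeline, the code below queries an image search endpoint for a keyword and stores each image URI with its metadata in SQLite. It assumes the Google Custom Search JSON API (with an API key and engine ID `cx`); the field names follow that API, while the schema, paging depth, and table layout are assumptions rather than the authors' design.

```python
import sqlite3
import requests

API_URL = "https://www.googleapis.com/customsearch/v1"  # Custom Search JSON API

def init_db(path="images.db"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS images (
                        uri TEXT PRIMARY KEY,
                        title TEXT,
                        mime TEXT,
                        width INTEGER,
                        height INTEGER,
                        source_page TEXT,
                        keyword TEXT)""")
    return conn

def crawl_keyword(keyword, api_key, cx, conn, pages=3):
    """Collect image URIs and metadata for one keyword and store them."""
    for page in range(pages):
        params = {"key": api_key, "cx": cx, "q": keyword,
                  "searchType": "image", "num": 10, "start": 1 + 10 * page}
        resp = requests.get(API_URL, params=params, timeout=10)
        for item in resp.json().get("items", []):
            image = item.get("image", {})
            conn.execute("INSERT OR IGNORE INTO images VALUES (?,?,?,?,?,?,?)",
                         (item["link"], item.get("title"), item.get("mime"),
                          image.get("width"), image.get("height"),
                          image.get("contextLink"), keyword))
        conn.commit()

# Hypothetical usage: API_KEY and CX must be provisioned in Google Cloud.
# conn = init_db()
# crawl_keyword("traffic sign", API_KEY, CX, conn)
```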