• Title/Summary/Keyword: Smart Web


A framework of management for preventing illegal distribution of pdf bookscan file (PDF 형식 북스캔 파일 불법 유통 방지를 위한 관리 프레임워크)

  • Lee, Kuk-Heon;Chung, Hyun-Ji;Ryu, Dae-Gull;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.5 / pp.897-907 / 2013
  • As various smart devices are developed, a growing number of people are reading eBooks instead of paper books. However, because the market does not supply enough eBooks, people have started making eBooks on their own by scanning paper books; the term "bookscan" arose for this reason. Because the scanning equipment is expensive, the number of bookscan companies is increasing. However, the commercial activity of bookscan companies violates copyright law, and bookscan files are in danger of being illegally distributed on the web because bookscan companies do not protect copyright. The publication market is following the same path as the music market, which collapsed due to copyright problems. Therefore, technical methods should be prepared to support the legal system against illegal bookscanning. The previous ICOP (Illegal Copyrights Obstruction Program) system has been applied to music and movie files, but not to publications. This paper suggests a framework for bookscan file management based on a practical mechanism.

A Study on Social Media Sentiment Analysis for Exploring Public Opinions Related to Education Policies (교육정책관련 여론탐색을 위한 소셜미디어 감정분석 연구)

  • Chung, Jin-Myeong;Yoo, Ki-Young;Koo, Chan-Dong
    • Informatization Policy / v.24 no.4 / pp.3-16 / 2017
  • With the development of social media services in the era of Web 2.0, the site of public opinion formation has partially shifted from traditional mass media to social media. This phenomenon continues to expand, and public opinions on government policies created and shared on social media are attracting more attention. Grasping public opinion is particularly important in policy formulation because setting up educational policies involves a variety of stakeholders and conflicts. The purpose of this study is to explore public opinions about education-related policies through an empirical analysis of social media documents on education policies using opinion mining techniques. For this purpose, we collected education policy-related documents by keyword, which were produced by users through social media services; tokenized the documents and extracted their sentiment features; and scored those features using sentiment dictionaries to find out public preferences for specific education policies. As a result, many negative public opinions were found regarding the smart education policies associated with the keywords of digital textbooks and e-learning, while the software education policies associated with the keywords of coding education and computational thinking drew more positive opinions. In addition, the general policies associated with the keywords of free school terms and creative personality education showed more negative public opinions. No sentiment could be extracted from as much as 20% of the documents, signifying that a certain share of blog posts and tweets still does not reflect the writers' opinions.
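As a rough illustration of the dictionary-based scoring step described in the abstract, the sketch below tokenizes a document and sums lexicon scores. The tiny English lexicon and whitespace tokenizer are hypothetical stand-ins for the Korean sentiment dictionaries and morphological tokenizer the study actually used.

```python
# Hypothetical miniature sentiment lexicon (word -> polarity score).
SENTIMENT_LEXICON = {"great": 1, "positive": 1, "confusing": -1, "negative": -1}

def score_document(text):
    """Tokenize a document and sum lexicon scores.

    Returns None when no sentiment-bearing term is found, mirroring the
    ~20% of documents from which the study could extract no sentiment.
    """
    tokens = text.lower().split()
    hits = [SENTIMENT_LEXICON[t] for t in tokens if t in SENTIMENT_LEXICON]
    return sum(hits) if hits else None
```

Scoring each collected document this way and aggregating by policy keyword yields the per-policy preference comparison the study reports.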

An Enhancing Technique for Scan Performance of a Skip List with MVCC (MVCC 지원 스킵 리스트의 범위 탐색 향상 기법)

  • Kim, Leeju;Lee, Eunji
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.5 / pp.107-112 / 2020
  • Recently, unstructured data has been produced rapidly through web-based services. NoSQL systems and key-value stores that process unstructured data as key-value pairs are widely used in various applications. This paper studies the skip list used for in-memory data management in an LSM-tree based key-value store. The skip list used in the key-value store is an insertion-based skip list that does not allow overwriting and processes all changes only by insertion. This behavior can support Multi-Version Concurrency Control (MVCC), which processes multiple read/write requests simultaneously through snapshot isolation. However, since duplicate keys exist in the skip list, performance degrades significantly due to unnecessary node visits during a list traversal. Serious overhead occurs in particular for a range query or scan operation that searches a specific range of data at once. This paper proposes a newly designed stride skip list to reduce this overhead. The stride skip list additionally maintains an indexing pointer to the last node of the same key, avoiding unnecessary node visits. The proposed scheme is implemented in RocksDB's in-memory component, and the performance evaluation shows that the performance of the SCAN operation improves by up to 350 times over the existing skip list for various workloads.
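A minimal single-level sketch of the last-node pointer idea follows; the real structure is a multi-level skip list inside RocksDB, and the names (`last_dup`, `scan`) are illustrative, not the authors' code. Versions of the same key sit adjacent, newest first, and the extra pointer lets a scan stride over all older versions of a key in one hop.

```python
class Node:
    """One version of a key in an insertion-only list (newest version first)."""
    def __init__(self, key, seq, value):
        self.key, self.seq, self.value = key, seq, value
        self.next = None
        self.last_dup = self  # points to the oldest node carrying the same key

def scan(head, lo, hi):
    """Collect the newest version of each key in [lo, hi], skipping duplicates."""
    out, node = [], head
    while node is not None and node.key <= hi:
        if node.key >= lo:
            out.append((node.key, node.value))  # first node of a key is newest
        node = node.last_dup.next  # stride past all older versions at once
    return out

# Bottom-level list: a@3 -> a@2 -> a@1 -> b@5 -> c@1
a3, a2, a1 = Node("a", 3, "A3"), Node("a", 2, "A2"), Node("a", 1, "A1")
b5, c1 = Node("b", 5, "B5"), Node("c", 1, "C1")
a3.next, a2.next, a1.next, b5.next = a2, a1, b5, c1
a3.last_dup = a2.last_dup = a1  # all "a" versions point to the oldest "a"
result = scan(a3, "a", "c")     # visits 3 nodes instead of 5
```

Without the `last_dup` pointer the scan would visit every stale version of every key in the range, which is the overhead the paper targets.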

Research on Communication and The Operating of Server System for Vehicle Diagnosis and Monitoring (차량진단 및 모니터링을 위한 통신과 서버시스템 운용에 관한 연구)

  • Ryoo, Hee-Soo;Won, Yong-Gwan;Park, Kwon-Chul
    • Journal of the Institute of Electronics Engineers of Korea TC / v.48 no.6 / pp.41-50 / 2011
  • This article concerns technology for providing the driver with the car's status, which is composed of trouble codes from the engine and readings from many sensors. Vehicle diagnostic programs are installed on wireless portable devices such as smartphones, PDAs, PMPs, and UMPCs. As a result, rich sensor information can be provided to the mechanic when the car is brought in for maintenance. The system can monitor relevant vehicle information on a portable device in real time, alert drivers with specific messages, and enable them to address abnormalities immediately. Moreover, the technology can help drivers who do not know their vehicles well to drive safely and economically, because the whole system consists only of a vehicle-information collecting device and a personal wireless portable device, and transfers the relevant data to server computers through a wireless network for processing. This technology lets a vehicle's operation, failures, and faults be monitored from a wireless portable device. Finally, the system comprises applications that display the car's status obtained from the car's internal sensors while driving.
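The abstract does not name the diagnostic protocol, but engine trouble codes are commonly read over OBD-II; as a purely illustrative sketch (an assumption, not the authors' system), the standard two-byte diagnostic trouble code encoding can be decoded like this:

```python
def decode_dtc(byte_a, byte_b):
    """Decode a standard two-byte OBD-II diagnostic trouble code.

    Bits 7-6 of the first byte select the system letter (P/C/B/U); the
    remaining nibbles are the four code digits, e.g. 0x01, 0x33 -> "P0133".
    """
    system = "PCBU"[(byte_a >> 6) & 0x03]
    return "{}{:X}{:X}{:X}{:X}".format(
        system,
        (byte_a >> 4) & 0x03,  # first digit (0-3)
        byte_a & 0x0F,         # second digit
        (byte_b >> 4) & 0x0F,  # third digit
        byte_b & 0x0F,         # fourth digit
    )
```

A collecting device would read such byte pairs from the vehicle bus and forward the decoded codes to the server for display on the driver's portable device.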

Development of a Moving Monitor System for Growing Crops and Environmental Information in Green House (시설하우스 이동형 환경 및 생장 모니터링 시스템 개발)

  • Kim, Ho-Joon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.285-290 / 2016
  • In rural areas, farmers face decreasing profits owing to imported crops and increasing costs. Recently, the government has encouraged the "6th industry," which merges farming, rural resources, and information and communication technology. The government therefore invests in supplying "smart greenhouses," in which a farmer monitors growing crops and environmental information to control growing conditions. The objective of this study is to develop a moving monitoring and control system for crops in a greenhouse. The system includes a movable sensing unit, a controlling unit, and a server PC unit. The movable sensing unit contains a high-resolution IP camera, a temperature and humidity sensor, and a WiFi repeater; it rolls on a rail hanging beneath the ceiling of the greenhouse. The controlling unit contains an embedded PC, a PLC module, a WiFi router, and a BLDC motor to drive the movable sensing unit. The server PC unit contains integrated farm management software, web pages, and databases that store images of the crops and environmental information. The movable sensing unit moves widely within the greenhouse and gathers a large amount of information. The server saves this information and provides it to customers through a direct-commerce web page. This system will help farmers control the greenhouse environment and sell their crops in the online market. Eventually, it will help farmers increase their profits.

A Study on Consumer's Perception and Preference for Providing Information of Fashion Products by Using QR Code (QR 코드를 이용한 패션제품의 정보제공에 대한 20대 소비자의 인식과 선호조사 연구)

  • Yoon, Jiwon;Yoo, Shinjung
    • Science of Emotion and Sensibility / v.22 no.2 / pp.59-69 / 2019
  • The present study explored consumers' perception of and preference for providing information on fashion products by using QR codes, and suggested the possibility of consumer-to-consumer and consumer-to-company connections. A survey was conducted on males and females in their 20s, a population among whom the rate of smartphone penetration is higher than in any other age group and who tend to exchange information online. The results showed that consumers are dissatisfied with the amount of information, the terminology of instructions, and the ambiguous washing symbols currently provided. The study therefore identified the need for better methods of providing information and found that the QR code, which can deliver high-quality information on fashion products, can be an efficient alternative. Moreover, respondents felt the need for detailed washing instructions, handling information, and material functionality for high-involvement fashion products such as outdoor wear, padded jackets, suits, and underwear worn next to the skin. For casual wear and coats used on a daily basis, they also desired styling tips or purchasing information, such as SNS OOTD (Outfit Of The Day) posts featuring the product, other products that may go well with the one purchased, and similar products. Therefore, a QR code used as a link to information web pages or a social network can help consumers satisfy their information needs and use the products effectively.

A study on Digital Agriculture Data Curation Service Plan for Digital Agriculture

  • Lee, Hyunjo;Cho, Han-Jin;Chae, Cheol-Joo
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.171-177 / 2022
  • In this paper, we propose a service method that provides insight into multi-source agricultural data, clusters environmental factors to support time-series data analysis, and curates crop environmental factors. The proposed curation service consists of four steps: collection, preprocessing, storage, and analysis. First, in the collection step, the service system collects and organizes multi-source agricultural data using an OpenAPI-based web crawler. Second, in the preprocessing step, the system performs data smoothing to reduce data measurement errors. Here, we adopt a smoothing method for each type of facility, considering the error rate associated with facility characteristics such as greenhouses and open fields. Third, in the storage step, an agricultural data integration schema and a Hadoop HDFS-based storage structure are proposed for large-scale agricultural data. Finally, in the analysis step, the service system performs DTW-based time-series classification that reflects the characteristics of digital agricultural data. Through the DTW-based classification, the accuracy of prediction results is improved by reflecting the characteristics of time-series data without any loss. As future work, we plan to implement the proposed service method and apply it to a smart farm greenhouse for testing and verification.
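The analysis step rests on the dynamic time warping (DTW) distance, which aligns two series of possibly different lengths before comparing them. A minimal textbook implementation (not the authors' code) is:

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences.

    Classic O(len(a) * len(b)) dynamic program: D[i][j] is the cheapest
    alignment cost of a[:i] and b[:j] under an absolute-difference cost.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of: insertion, deletion, or diagonal match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warping path may stretch or compress time, seasonal sensor series sampled at different rates can still be matched; a nearest-neighbor classifier over this distance is a common way to realize DTW-based time-series classification.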

Introducing SEABOT: Methodological Quests in Southeast Asian Studies

  • Keck, Stephen
    • SUVANNABHUMI / v.10 no.2 / pp.181-213 / 2018
  • How should Southeast Asia (SEA) be studied? The need to explore and identify methodologies for studying SEA is inherent in its multifaceted subject matter. At a minimum, the region's rich cultural diversity inhibits both the articulation of decisive defining characteristics and the training of scholars who can write with confidence beyond their specialisms. Consequently, the challenges of understanding the region remain, and a consensus regarding the most effective approaches to studying its history, identity and future seems quite unlikely. Furthermore, "Area Studies" more generally has proved to be a less attractive frame of reference for burgeoning scholarly trends. This paper will propose a new tool to help address these challenges. Even though the science of artificial intelligence (AI) is in its infancy, it has already yielded new approaches to many commercial, scientific and humanistic questions. At this point, AI has been used to produce news, generate better smart phones, deliver more entertainment choices, analyze earthquakes and write fiction. The time has come to explore the possibility that AI can be put at the service of the study of SEA. The paper intends to lay out what would be required to develop SEABOT. This instrument might exist as a robot on the web which could be called upon to make the study of SEA both broader and more comprehensive. The discussion will explore the financial resources, ownership and timeline needed to take SEABOT from an idea to a reality. SEABOT would draw upon artificial neural networks (ANNs) to mine the region's "Big Data", while synthesizing the information to form new and useful perspectives on SEA. Overcoming significant language issues, applying multidisciplinary methods and drawing upon new yields of information should produce new questions and ways to conceptualize SEA. SEABOT could lead to findings which might not otherwise be achieved.
SEABOT's work might well produce outcomes which could open up solutions to immediate regional problems, provide ASEAN planners with new resources and make it possible to eventually define and capitalize on SEA's "soft power". That is, new findings should provide the basis for ASEAN diplomats and policy-makers to develop new modalities of cultural diplomacy and improved governance. Last, SEABOT might also open up avenues to tell the SEA story in new, distinctive ways. SEABOT is treated here as a heuristic device to explore the results which such an instrument might yield. More importantly, the discussion will also raise the possibility that an AI-driven perspective on SEA may prove to be even more problematic than it is beneficial.


Development of Greenhouse Cooling and Heating Load Calculation Program Based on Mobile (모바일 기반 온실 냉난방 부하 산정 프로그램 개발)

  • Moon, Jong Pil;Bang, Ji Woong;Hwang, Jeongsu;Jang, Jae Kyung;Yun, Sung Wook
    • Journal of Bio-Environment Control / v.30 no.4 / pp.419-428 / 2021
  • To develop a mobile-based greenhouse energy calculation program, the overall thermal transmittance of 10 types of major covering materials and 16 types of insulation materials was first measured. In addition, to estimate the overall thermal transmittance when covering and insulation materials are installed in double or triple layers, 24 double-installation combinations and 59 triple-installation combinations were measured using a hotbox. The overall thermal transmittance and thermal resistance values of the single materials were then used to calculate the overall thermal transmittance of multi-layer installations, and a linear regression equation was derived to correct the error against the measured values. The resulting model for estimating the thermal transmittance of multi-layer installations from single-material values achieved a model evaluation index of 0.90 (good when 0.5 or more), indicating that the estimates were very close to the actual values. In addition, an on-site test showed that the estimated heat saving rate was smaller than the actual value, with a relative error of 2%. Based on these results, a mobile-based greenhouse energy calculation program was developed as an HTML5 standard web application, designed to work with various mobile-device and PC browsers with N-Screen support. It provides the overall thermal transmittance (heating load coefficient) for each combination of greenhouse coverings and thermal insulation materials and evaluates the energy consumption of a target greenhouse over a specific period. With the optimal selection of covering and insulation materials according to the region and shape of the greenhouse, an energy-saving greenhouse design should be possible.
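The baseline multi-layer estimate before the regression correction can be sketched by treating the layers as thermal resistances in series; this is a standard building-physics approximation, and surface-film resistances plus the paper's fitted correction are deliberately omitted.

```python
def combined_u_value(u_single):
    """Estimate the overall thermal transmittance of layered covers/insulation.

    Treats each layer as a thermal resistance in series:
        R_total = sum(1 / U_i),   U_total = 1 / R_total   (W m^-2 K^-1)
    Surface-film resistances and the paper's regression correction against
    hotbox measurements are omitted from this sketch.
    """
    return 1.0 / sum(1.0 / u for u in u_single)
```

For example, adding an insulation layer with U = 3.0 under a cover with U = 6.0 halves the overall transmittance to 2.0 in this model, which is the kind of combination table the program exposes per covering/insulation pairing.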

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the proper sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. The structure of a Wikipedia infobox is defined by an infobox template that provides standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
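The training-data generation step attaches BIO tags to sentence tokens, marking where an attribute value begins (B-), continues (I-), or is absent (O). A minimal sketch of that labeling scheme follows; the token and label names are illustrative, not drawn from the authors' pipeline.

```python
def bio_tag(tokens, spans):
    """Label tokens with BIO tags.

    spans is a list of (start, end_exclusive, label) triples marking which
    token ranges carry an attribute value; all other tokens get "O".
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label          # beginning of a value span
        for i in range(start + 1, end):
            tags[i] = "I-" + label          # continuation of the same span
    return tags

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags = bio_tag(tokens, [(0, 1, "city"), (5, 7, "country")])
```

A CRF or Bi-LSTM-CRF sequence labeler is then trained to predict such tags, and contiguous B-/I- runs are read back out as the values of RDF triples.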