• Title/Summary/Keyword: 논문 리뷰 (paper review)

Search Results: 468

ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1229-1244
    • /
    • 2017
  • In recent years, as demand for data-based analytical methodologies has increased across various fields, optimization methods have been developed to handle them. In particular, many constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) deals effectively with linear constraints and can be used as a parallel optimization algorithm. ADMM is an approximation algorithm that solves a complex original problem by splitting it into subproblems that are easier to optimize and then combining their solutions. It is useful for optimizing non-smooth or composite objective functions, and it is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. We also introduce methodologies that utilize regularization. Simulation results are presented to demonstrate the effectiveness of the lasso.
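The splitting strategy and proximal step the abstract describes can be illustrated on the lasso, the paper's running example. The sketch below is a minimal numpy implementation of the standard ADMM splitting x = z, not the authors' code; variable names and iteration counts are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    # Proximal operator of k * ||.||_1: elementwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                       # scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)       # factor of the smooth subproblem
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # quadratic x-step
        z = soft_threshold(x + u, lam / rho)           # proximal z-step
        u = u + x - z                                  # dual update
    return z
```

A quick sanity check: with A equal to the identity, the lasso solution reduces to soft-thresholding of b, so the iterates should converge to that closed form.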

A Study on the Visual Representation of TREC Text Documents in the Construction of Digital Library (디지털도서관 구축과정에서 TREC 텍스트 문서의 시각적 표현에 관한 연구)

  • Jeong, Ki-Tai;Park, Il-Jong
    • Journal of the Korean Society for information Management
    • /
    • v.21 no.3
    • /
    • pp.1-14
    • /
    • 2004
  • Visualization of documents helps users when they search for similar documents, and all research in information retrieval addresses the problem of a user with an information need facing a data source that contains an acceptable solution to that need. In various contexts, adequate solutions to this problem have included alphabetized cubbyholes housing papyrus rolls, microfilm registers, card catalogs, and inverted files coded onto discs. Many information retrieval systems rely on the use of a document surrogate. Though they might be surprised to discover it, nearly every information seeker uses an array of document surrogates: summaries, tables of contents, abstracts, reviews, and MARC records are all document surrogates. That is, they stand in for a document, allowing a user to make some decision regarding it: whether to retrieve a book from the stacks, whether to read an entire article, and so on. In this paper, another type of document surrogate is investigated using a grouping method over term lists. Using multidimensional scaling (MDS), those surrogates are visualized on a two-dimensional graph, where the distance between points represents the similarity of the documents: the closer the distance, the more similar the documents.
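The mapping from term-list surrogates to a two-dimensional graph can be sketched with classical MDS. The term lists and distance measure below are hypothetical stand-ins (the abstract does not specify which were used); the sketch only shows the shape of the technique.

```python
import numpy as np

# Hypothetical term-list surrogates standing in for TREC documents
docs = {
    "d1": {"retrieval", "index", "query"},
    "d2": {"retrieval", "query", "ranking"},
    "d3": {"biology", "gene", "protein"},
}

def jaccard_distance(a, b):
    # Dissimilarity of two term sets: 1 - |intersection| / |union|
    return 1.0 - len(a & b) / len(a | b)

names = list(docs)
n = len(names)
D = np.array([[jaccard_distance(docs[i], docs[j]) for j in names]
              for i in names])

# Classical MDS: double-center the squared distances, then use the
# top-2 eigenvectors (scaled by sqrt of eigenvalues) as 2-D coordinates
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:2]
coords = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

Plotting `coords` puts similar documents (d1, d2) near each other and the dissimilar one (d3) farther away, which is exactly the "closer means more similar" reading described above.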

Emotion-on-a-chip(EOC) : Evolution of biochip technology to measure human emotion (감성 진단칩(Emotion-on-a-chip, EOC) : 인간 감성측정을 위한 바이오칩기술의 진화)

  • Jung, Hyo-Il;Kihl, Tae-Suk;Hwang, Yoo-Sun
    • Science of Emotion and Sensibility
    • /
    • v.14 no.1
    • /
    • pp.157-164
    • /
    • 2011
  • Emotion science is one of the rapidly expanding engineering/scientific disciplines with a major impact on human society. The growing interest in emotion science and engineering owes much to the recent trend of merging various academic fields. In this paper we propose the potential importance of biochip technology with which human emotion can be precisely measured in real time using body fluids such as blood, saliva, and sweat. We newly name such a biochip an Emotion-on-a-Chip (EOC). An EOC consists of biological markers to measure the emotion, an electrode to acquire the signal, a transducer to transfer the signal, and a display to show the result. In particular, microfabrication techniques have made it possible to construct nano/micron-scale sensing parts and chips that accommodate the biological molecules capturing emotional biomarkers, giving us new opportunities to investigate emotion precisely. Future developments in EOC techniques will help combine the social and natural sciences and consequently expand the scope of such studies.


An Optimized V&V Methodology to Improve Quality for Safety-Critical Software of Nuclear Power Plant (원전 안전-필수 소프트웨어의 품질향상을 위한 최적화된 확인 및 검증 방안)

  • Koo, Seo-Ryong;Yoo, Yeong-Jae
    • Journal of the Korea Society for Simulation
    • /
    • v.24 no.4
    • /
    • pp.1-9
    • /
    • 2015
  • As the use of software becomes wider in safety-critical nuclear fields, studies to improve the safety and quality of such software have been actively carried out for more than the past decade. In a nuclear power plant, the man-machine interface system (MMIS) performs the function of the human brain and nervous system and consists of fully digitalized equipment. Errors in software for the nuclear MMIS may therefore cause abnormal operation of the plant and can result in economic loss due to the consequent trip of the plant. Verification and validation (V&V) is a software-engineering discipline that helps to build quality into software, and the nuclear industry is required by laws and regulations to implement and adhere to thorough V&V activities along the software lifecycle. V&V is a collection of analysis and testing activities across the full lifecycle and complements the efforts of other quality-engineering functions. This study proposes a methodology based on V&V activities and a related tool chain to improve the quality of software in nuclear power plants. The optimized methodology consists of document evaluation, requirement traceability, source code review, and software testing. The proposed methodology has been applied to, and approved for, the actual MMIS project for Shin-Hanul units 1 and 2.

Enhanced Grid-Based Trajectory Cloaking Method for Efficiency Search and User Information Protection in Location-Based Services (위치기반 서비스에서 효율적 검색과 사용자 정보보호를 위한 향상된 그리드 기반 궤적 클로킹 기법)

  • Youn, Ji-Hye;Song, Doo-Hee;Cai, Tian-Yuan;Park, Kwang-Jin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.8
    • /
    • pp.195-202
    • /
    • 2018
  • With the development of location-based applications such as smart phones and GPS navigation, active research is being conducted to protect location and trajectory privacy. To receive location-related services, users must disclose their exact location to the server. However, this exposes not only their current location but also their trajectory to the server, raising privacy concerns. Furthermore, users request from the server not only location information but also multimedia information (photographs, reviews, etc. of the location), which increases both the processing cost of the server and the amount of information the user must receive. To solve these problems, this study proposes the enhanced grid-based trajectory cloaking (EGTC) technique. As with the existing grid-based trajectory cloaking (GTC) technique, the EGTC method divides the user trajectory into grids according to the user privacy level (UPL) and creates a cloaking region in which a random query sequence is determined. In the next step, the necessary information is received as an index, considering the sub-grid cell c(x,y) corresponding to the path along which the user wishes to move. The proposed method ensures trajectory privacy, as the existing GTC method does, while reducing the amount of information the user must listen to. The effectiveness of the proposed method has been demonstrated through experimental results.
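The core idea shared by GTC and EGTC, coarsening exact positions into grid cells and randomizing the query sequence, can be sketched as follows. This is a simplified illustration of grid cloaking only, not the EGTC algorithm itself; the cell size standing in for the UPL and all function names are hypothetical.

```python
import random

def to_cell(point, cell_size):
    # Map an exact (x, y) position to the grid cell containing it
    x, y = point
    return (int(x // cell_size), int(y // cell_size))

def cloak_trajectory(trajectory, cell_size, seed=None):
    """Replace exact points with grid cells and randomize query order.

    cell_size plays the role of the user privacy level: larger cells
    reveal less about the exact trajectory.
    """
    cells = []
    for p in trajectory:
        c = to_cell(p, cell_size)
        if not cells or cells[-1] != c:
            cells.append(c)              # one cell per visited region
    order = list(range(len(cells)))
    random.Random(seed).shuffle(order)   # random query sequence
    return [cells[i] for i in order]
```

The server then sees only a shuffled sequence of coarse cells, so neither the exact positions nor the visiting order of the trajectory is disclosed.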

A Study on Web Mining System for Real-Time Monitoring of Opinion Information Based on Web 2.0 (의견정보 모니터링을 위한 웹 마이닝 시스템에 관한 연구)

  • Joo, Hae-Jong;Hong, Bong-Hwa;Jeong, Bok-Cheol
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.1
    • /
    • pp.149-157
    • /
    • 2010
  • As use of the Internet has increased, so has demand for the opinion information posted there. However, such resources exist only on individual websites, and people searching for this information find it inconvenient to visit each site in turn. This paper focuses on an opinion-information extraction and analysis system based on Web mining, using statistics collected from Web contents: users' opinion information scattered across several websites can be extracted and analyzed automatically. The system provides an opinion-information search service that enables users to search for positive and negative opinions in real time and check their statistics. Users can also search for and monitor other opinion information in real time by entering keywords into the system. In tests, the proposed techniques proved superior to existing ones. The ability to extract positive and negative opinion information was assessed; specifically, movie review sentences were used as test data and the results were analyzed.
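The positive/negative labeling of review sentences the abstract evaluates can be illustrated with a minimal lexicon-based classifier. The word lists below are hypothetical; the paper's actual statistical extraction method is not described in the abstract, so this sketch only shows the simplest form of the task.

```python
# Tiny illustrative sentiment lexicons (hypothetical)
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "boring"}

def classify_opinion(sentence):
    """Label a sentence positive/negative/neutral by lexicon counts."""
    words = sentence.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Aggregating these labels over sentences crawled from several sites yields the real-time positive/negative statistics the system reports for a keyword.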

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.12
    • /
    • pp.291-306
    • /
    • 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and the productivity of businesses. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen in the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve the privacy of users. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform, presents privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful, and finally discusses open issues and challenges in leveraging deep learning within edge computing.

Improvement of a Product Recommendation Model using Customers' Search Patterns and Product Details

  • Lee, Yunju;Lee, Jaejun;Ahn, Hyunchul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.265-274
    • /
    • 2021
  • In this paper, we propose a novel recommendation model based on Doc2vec that uses search keywords and product details. Until now, many prior studies on recommender systems have proposed collaborative filtering (CF) as the main recommendation algorithm, which uses only structured input data such as customers' purchase histories or ratings. However, using unstructured data such as online customer reviews in CF may lead to better recommendations. Against this background, we propose using search keyword data and product detail information, which are seldom used in previous studies, for product recommendation. The proposed model makes recommendations using CF that simultaneously considers ratings, search keywords, and detailed information of the products purchased by customers. To extract quantitative patterns from these unstructured data, Doc2vec is applied. In our experiments, the proposed model outperformed the conventional recommendation model, and search keywords and product details were confirmed to have a significant effect on recommendation. This study has academic significance in that it applies customers' online behavior information to the recommender system and mitigates the cold-start problem, one of the critical limitations of CF.
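The idea of combining rating-based CF with text-derived item vectors can be sketched as a blended item similarity. The random vectors below are placeholders for real Doc2vec embeddings of search keywords and product details, and the blending weight is illustrative; the authors' actual model is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings standing in for Doc2vec vectors of search
# keywords / product details (hypothetical, 8-dimensional)
n_items, dim = 5, 8
text_vecs = rng.normal(size=(n_items, dim))
# users x items rating matrix (hypothetical)
ratings = rng.integers(1, 6, size=(10, n_items)).astype(float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def hybrid_item_similarity(i, j, alpha=0.5):
    """Blend rating-based and text-based item similarity."""
    sim_rating = cosine(ratings[:, i], ratings[:, j])  # CF signal
    sim_text = cosine(text_vecs[i], text_vecs[j])      # Doc2vec signal
    return alpha * sim_rating + (1 - alpha) * sim_text

def recommend(item, k=2):
    # Rank the other items by blended similarity to the given item
    scores = [(j, hybrid_item_similarity(item, j))
              for j in range(n_items) if j != item]
    return [j for j, _ in sorted(scores, key=lambda t: -t[1])[:k]]
```

Because the text vectors exist even for items with no ratings, this kind of blending is one way such a model can mitigate the cold-start problem the abstract mentions.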

Applying a smart livestock system as a development strategy for the animal life industry in the future: A review (미래 동물생명산업 발전전략으로써 스마트축산의 응용: 리뷰)

  • Park, Sang-O
    • Journal of the Korean Applied Science and Technology
    • /
    • v.38 no.1
    • /
    • pp.241-262
    • /
    • 2021
  • This paper reviews the necessity of an information and communication technology (ICT)-based smart livestock system as a development strategy for the animal life industry of the future, and predicts trends in livestock and animal food through 2050, 30 years from now. Worldwide, livestock raising and consumption of animal food are changing rapidly in response to population growth, aging, a shrinking agricultural population, urbanization, and income growth. Climate change can alter the environment and livestock's productivity and reproductive efficiency, while livestock production can lead to increased greenhouse gas emissions, land degradation, water pollution, animal welfare issues, and human health problems. To solve these issues, a preemptive strategy is needed that responds to climate change; improves productivity, animal welfare, and the nutritional quality of animal foods; and prevents animal diseases using an ICT-based smart livestock system fused with the 4th industrial revolution across the animal life industry. The animal life industry of the future needs to integrate automation to improve sustainability and production efficiency. In the digital age, with intelligent precision animal feeding based on the IoT (internet of things) and big data, an ICT-based smart livestock system can collect, process, and analyze data from various sources in the animal life industry. It comprises a digital system that can precisely and remotely control environmental parameters inside and outside livestock housing. The system can also be used to monitor animal behavior and welfare and to manage feeding, using sensing technology for remote control through the Internet and mobile phones. It can help in the collection, storage, retrieval, and dissemination of a wide range of information that farmers need, and can provide new information services to farmers.

A review on urban inundation modeling research in South Korea: 2001-2022 (도시침수 모의 기술 국내 연구동향 리뷰: 2001-2022)

  • Lee, Seungsoo;Kim, Bomi;Choi, Hyeonjin;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.10
    • /
    • pp.707-721
    • /
    • 2022
  • This study presents a state-of-the-art review of urban inundation simulation technology, summarizing major achievements and limitations along with future research recommendations and challenges. More than 160 papers published in major domestic academic journals since the 2000s were analyzed. After analyzing the core themes and contents of the papers, the status of technological development was reviewed according to simulation methodology, covering both physically based and data-driven approaches. In addition, research trends by application purpose and advances overseas and in related fields were analyzed. Since more than 60% of urban inundation research used the Storm Water Management Model (SWMM), the development of new modeling techniques for the detailed physical processes of dual drainage is encouraged. Data-driven approaches have become a new status quo in urban inundation modeling; however, given that hydrological extreme data are rare, balanced development of data-driven and physically based approaches is recommended. Urban inundation analysis technology, actively combined with new technologies from other fields such as artificial intelligence, IoT, and the metaverse, will require continuous support from society and holistic approaches to address climate risk and reduce disaster damage.