• Title/Summary/Keyword: 사용자 정보 (user information)


Study of Geological Log Database for Public Wells, Jeju Island (제주도 공공 관정 지질주상도 DB 구축 소개)

  • Pak, Song-Hyon;Koh, Giwon;Park, Junbeom;Moon, Dukchul;Yoon, Woo Seok
    • Economic and Environmental Geology / v.48 no.6 / pp.509-523 / 2015
  • This study introduces a newly implemented geological well-log database for Jeju public water wells, built for a research project focusing on an integrated hydrogeology database of Jeju Island. A detailed analysis of the 1,200 existing geological logs for public wells developed on Jeju Island since 1970 revealed major issues to be addressed before the logs could be used to build the Jeju geological log DB: (1) lack of uniformity in rock-name classification, (2) poor definitions of pyroclastic deposits and of sand and gravel layers, (3) lack of borehole aquifer information, (4) missing information on well-screen installation in many wells, and (5) inconsistent logging descriptions from logger to logger. A new Jeju geological log DB with standardized input and output formats was implemented to overcome these issues by reestablishing the names of Jeju volcanic and sedimentary rocks and by adopting a commercial geological log program with a database-structured input format. The newly designed database structure allows users to store large numbers of geology, drilling, and test data records in a standardized input structure, and borehole groundwater and aquifer-test data can be added without modifying the existing database structure. The new geological log DB can therefore serve as a standardized database for the large number of existing public wells on Jeju Island and for wells to be developed there in the future, and it will form the basis of the ongoing project 'Developing GIS-based integrated interpretation system for Jeju Island hydrogeology'.
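
As a rough illustration of the kind of standardized record structure such a database implies, the sketch below (Python) models a single borehole log entry. All class and field names are hypothetical; they only mirror the categories the abstract mentions, namely standardized rock names, screen and aquifer information, and test data that can be attached later without changing the core structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class LithologyInterval:
    """One depth interval of a geological log (field names are hypothetical)."""
    top_m: float          # interval top depth (m)
    bottom_m: float       # interval bottom depth (m)
    rock_name: str        # taken from a standardized rock-name list
    description: str = "" # free-text logging description

@dataclass
class WellLog:
    """A single public-well log record in a standardized structure."""
    well_id: str
    lithology: List[LithologyInterval] = field(default_factory=list)
    screen_intervals: List[Tuple[float, float]] = field(default_factory=list)  # (top_m, bottom_m)
    aquifer_info: Optional[str] = None  # borehole aquifer description, if known
    # later test data (e.g. aquifer tests) can be attached without altering the fields above
    test_data: Dict[str, dict] = field(default_factory=dict)

# usage sketch with hypothetical values
log = WellLog(well_id="JEJU-0001")
log.lithology.append(LithologyInterval(0.0, 12.5, "basalt (standardized name)"))
log.test_data["aquifer_test"] = {"date": "2015-01-01", "transmissivity_m2_per_day": 120.0}
```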

A Study on the Analysis of Difference between IT and Non-IT Companies on the Consumer Dispute Resolution System's Continuous Use Intention -Focusing on Korean Small and Medium Enterprises (소비자 분쟁처리시스템 지속사용의도에 대하여 IT기업과 비IT기업 간의 차이분석에 관한 연구 -한국 중소기업을 중심으로)

  • Jung, Soo-Yong;Shin, Yong-tae;Han, Jeong-Hoon;Lee, Sung-Hoon
    • Journal of Digital Convergence / v.15 no.12 / pp.203-212 / 2017
  • This research analyzed the factors that influence the intention of small- and medium-sized enterprises (SMEs) to continue using the consumer dispute settlement system. The consumer dispute settlement system is an Internet information portal service through which SMEs and small businesses can receive guidance on proper damage handling and legal services when involved in disputes with consumers, including malicious ('black') consumers. Taking SME users of the system as subjects, the study examined how the system's information quality, system quality, and perceived environmental factors influence perceived ease of use and perceived usefulness, and ultimately the intention to use. The accuracy, convenience, and cost of the consumer dispute settlement system had positive effects on perceived ease of use, and accuracy and convenience also had positive effects on perceived usefulness. It was further verified that perceived ease of use and perceived usefulness of the system have positive relationships with the intention to use. Based on these results, if the quality of the consumer dispute settlement system is maintained and supplemented according to this priority order, the system can be expected to be maintained and developed into one that improves on the existing system.

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.123-136 / 2014
  • Recently, online shopping has developed further as the use of the Internet and a variety of smart mobile devices has become more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, competition among online retailers has grown increasingly fierce, and many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated from the keyword's frequency of appearance. However, it is widely accepted that keyword prices cannot be based solely on frequency, because many keywords appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people use them frequently. From the perspective of shopping malls, therefore, a specialized process is required to extract meaningful keywords, and demand for automating this extraction is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behavior; in other words, only search keywords that lead from the search results page to shopping-related pages are extracted from the entire set of search keywords. A comparison is then made between the rankings of the extracted keywords and the rankings of the entire set of search keywords. Two types of data are used in the experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental dataset came from a web-site ranking service and from the largest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected and the search keywords used on those sites are extracted, which can be done by simple parsing; the extracted keywords are then ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To do this, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 keywords as the set of true shopping keywords. We then measured precision, recall, and F-score (the harmonic mean of precision and recall) for the entire keyword set and for the shopping-related keywords. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were higher than those of the entire keyword set. This study proposes a scheme that can obtain shopping-related keywords in a relatively simple manner: shopping-related keywords are extracted simply by examining transactions whose next visit is a shopping mall. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing, and the proposed methodology can easily be applied to constructing keyword sets for other special areas as well as for shopping.
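
The core extraction step, keeping only search keywords whose next visit leads to a shopping mall, can be sketched as follows. The log format, the shopping-site and portal-domain lists, and the query-parameter name are assumptions made for illustration, not the authors' actual implementation.

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# hypothetical domain lists; real experiments would use the collected site information
SHOPPING_DOMAINS = {"shop.example.com", "mall.example.net"}
PORTAL_DOMAINS = {"search.portal.example"}

def extract_keyword(url: str):
    """Return the search keyword if the URL is a portal search page (parameter name assumed)."""
    parts = urlparse(url)
    if parts.netloc in PORTAL_DOMAINS:
        return parse_qs(parts.query).get("query", [None])[0]
    return None

def shopping_related_keywords(history):
    """history: list of visited URLs for one user, in time order.
    Counts every search keyword, and separately counts it as shopping-related
    when the very next visit is a shopping-mall domain."""
    all_kw, shopping_kw = Counter(), Counter()
    for current, nxt in zip(history, history[1:]):
        kw = extract_keyword(current)
        if kw is None:
            continue
        all_kw[kw] += 1
        if urlparse(nxt).netloc in SHOPPING_DOMAINS:
            shopping_kw[kw] += 1
    return all_kw, shopping_kw
```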

A study on security independent behavior in social game using expanded health belief model (건강신념모델을 확장한 소셜게임(Social Game) 보안의지행동에 관한 연구)

  • Ahn, Ho-Jeong;Kim, Sung-Jun;Kwon, Do-Soon
    • Management &amp; Information Systems Review / v.35 no.2 / pp.99-118 / 2016
  • With the development of the Internet and the popularization of smartphones in recent years, social network services have grown rapidly. On top of this, the smartphone gaming market is expanding quickly and the use of mobile social games is rising significantly. Game data manipulation targeting these services and leakage of personal information have highlighted the importance of social game security. This study aims to propose effective and efficient development plans for social game services by identifying the factors that affect the security willingness behavior of social game users in Korea, and by empirically examining the causal paths through which these factors influence security willingness behavior via perceived behavioral control and attitudes toward privacy infringement. To do this, a research model was proposed in which the Health Belief Model (HBM) was expanded and applied as the main framework explaining users' security willingness behavior. To verify the research model empirically, a survey was conducted among students at K University and S University in Seoul who had experience using social game services. According to the findings, first, perceived severity had a positive influence on trust but not on self-efficacy. Second, perceived susceptibility had no positive effect on self-efficacy or trust. Third, perceived benefits had positive effects on both self-efficacy and trust. Fourth, perceived barriers had no positive effect on self-efficacy or trust. Fifth, self-efficacy had a positive effect on trust but not on security willingness behavior. Sixth, trust had no positive effect on security willingness behavior. Based on these results, the study makes strategic proposals to help social game users raise their level of security awareness and security willingness.


Analysis on the Cooling Efficiency of High-Performance Multicore Processors according to Cooling Methods (기계식 쿨링 기법에 따른 고성능 멀티코어 프로세서의 냉각 효율성 분석)

  • Kang, Seung-Gu;Choi, Hong-Jun;Ahn, Jin-Woo;Park, Jae-Hyung;Kim, Jong-Myon;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information / v.16 no.7 / pp.1-11 / 2011
  • Many researchers have studied methods to improve processor performance. However, the highly integrated semiconductor technology used to improve processor performance causes many problems, such as reduced battery life, high power density, and hotspots. In particular, because hotspots have a critical impact on chip reliability, thermal problems should be considered together with performance and power consumption when designing high-performance processors. Various studies have sought to alleviate the thermal problems of processors. In the past, mechanical cooling methods were used to control processor temperature, but state-of-the-art microprocessors cause severe thermal problems, resulting in increased cooling cost. Recent studies have therefore focused on architecture-level thermal-aware design techniques rather than mechanical cooling methods. Although architecture-level thermal-aware design techniques are efficient at reducing processor temperature, they inevitably cause performance degradation. Therefore, if mechanical cooling methods can manage the thermal problems of processors efficiently, performance can be improved by reducing the degradation caused by architecture-level techniques such as dynamic thermal management. In this paper, we analyze the cooling efficiency of high-performance multicore processors according to the mechanical cooling method used. In our experiments with an air cooler and a liquid cooler, the liquid cooler consumes more power than the air cooler but reduces the temperature more efficiently. In particular, the cost of lowering the temperature by 1°C varies with the environment. Therefore, if mechanical cooling methods are used appropriately, the temperature of high-performance processors can be managed more efficiently.
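
As a worked example of the cost-per-degree comparison, the sketch below divides the extra power a cooler draws by the temperature reduction it delivers; the numbers are hypothetical placeholders, not measurements from the paper.

```python
def cost_per_degree(extra_power_w: float, temp_drop_c: float) -> float:
    """Watts of additional cooling power spent per 1 degree C of temperature reduction."""
    return extra_power_w / temp_drop_c

# hypothetical figures for illustration only
air = cost_per_degree(extra_power_w=3.0, temp_drop_c=10.0)     # air cooler
liquid = cost_per_degree(extra_power_w=12.0, temp_drop_c=25.0) # liquid cooler: more power, larger drop
print(f"air: {air:.2f} W/degC, liquid: {liquid:.2f} W/degC")
```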

On-Line Determination Steady State in Simulation Output (시뮬레이션 출력의 안정상태 온라인 결정에 관한 연구)

  • 이영해;정창식;경규형
    • Proceedings of the Korea Society for Simulation Conference / 1996.05a / pp.1-3 / 1996
  • In simulation-based system analysis, the automation of experiments is an area in which much research and development is currently under way. In simulations of computer and communication systems, for example, automated control of experiments is required when a large number of models must be simulated. Unless the simulation experiment procedure, including the number of replications, the run length, and the data collection method, is automated, the time and human resources required become substantial and the analysis of the output data also becomes difficult. To automate simulation experiments while analyzing simulation output efficiently, the problem of removing the initial bias that always arises in a simulation run must be solved first. Only when the data used for output analysis are collected in the steady state, free of initial bias, can the real system be interpreted correctly. In practice, the most important and difficult problem in simulation output analysis is estimating the steady-state mean of the stochastic process formed by the output data, together with a confidence interval (c.i.) for that mean. The information contained in a confidence interval tells the decision maker how precisely the mean can be estimated. However, constructing a confidence interval cannot rely directly on classical statistical techniques, because the output data from a single simulation run are generally nonstationary and autocorrelated; simulation output-analysis techniques are used to address this problem. This paper proposes new techniques for finding the truncation point needed to remove initial bias: a method based on the Euclidean distance (ED) and a method using the backpropagation neural network (BNN) algorithm widely used for pattern classification. Unlike most existing techniques, these methods require no pilot runs and can determine the truncation point online during a single simulation run. Existing work on truncation points is as follows. Conway's rule selects the current observation as the truncation point if it is neither the maximum nor the minimum of the subsequent data; because of the structure of the algorithm, the truncation point cannot be determined online. Whereas Conway's rule cannot be applied online, the Modified Conway Rule (MCR) selects the current observation as the truncation point if it is neither the maximum nor the minimum of the preceding data, so it can be applied online. The Crossings-of-the-Mean Rule (CMR) uses the cumulative mean and decides based on the number of times the observations cross this mean from above or below; a crossing count must be chosen to use this rule, and in general the chosen count is not applicable across different systems. The Cumulative-Mean Rule (CMR2) plots the grand cumulative mean of the output data obtained from several pilot runs and determines the steady-state point visually; because it uses the cumulative mean of averages over several runs, online determination is impossible, and the analyst must make an arbitrary decision from the graph. Welch's Method (WM) uses a Brownian-bridge statistic, exploiting the property that, as n approaches infinity, it converges to the Brownian bridge distribution; batches are formed from the simulation output and a single batch is used as the sample. This method has the drawbacks of a complicated algorithm and the need to estimate parameter values. The Law-Kelton Method (LKM) is based on regression theory: after the simulation ends, a regression line is fitted to the cumulative-mean data, and the point at which the null hypothesis of zero slope is accepted is taken as the truncation point; since the data are used in the reverse of the order in which they were collected after the run ends, online determination is impossible. Welch's Procedure (WP) requires at least five simulation runs, uses moving averages of the collected data to determine the truncation point visually, and relies on iterative deletion, so online determination is impossible; a window size for the moving average must also be chosen. As this review shows, existing methods are weak from the viewpoint of online truncation-point determination during a single simulation run. Moreover, current commercial simulation software leaves the choice of truncation point to the analyst, so the truncation point cannot be determined accurately and quantitatively for the system under study. If the user chooses the truncation point arbitrarily, the initial-bias problem is unlikely to be handled effectively, and the risk grows of deleting far more data than necessary or too little to remove the bias. In addition, most existing methods require pilot runs to find the truncation point; simulation runs performed only to find the steady-state point are not used for output analysis, so a large amount of time is wasted.
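
As a concrete illustration of one of the online rules surveyed above, the sketch below implements the Modified Conway Rule as paraphrased in the abstract: the current observation becomes the truncation point when it is neither the maximum nor the minimum of the observations seen so far. This is not the ED or BNN method the paper itself proposes.

```python
def mcr_truncation_point(stream):
    """Modified Conway Rule: return the index of the first observation that is
    neither the maximum nor the minimum of all observations seen before it."""
    running_min = float("inf")
    running_max = float("-inf")
    for i, x in enumerate(stream):
        if i > 0 and running_min < x < running_max:
            return i  # steady-state data are taken to start here; delete observations 0..i-1
        running_min = min(running_min, x)
        running_max = max(running_max, x)
    return None  # no truncation point found within the run

# usage on a toy output sequence with a decaying initial transient
output = [10.0, 8.0, 6.5, 5.9, 6.1, 5.8, 6.0, 5.9]
print(mcr_truncation_point(output))  # -> 4
```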


Study of system using load cell for real time weight sensing of artificial incubator (인공부화기의 실시간 중량감지를 위한 로드셀을 이용한 시스템 연구)

  • Jeong, Jin-hyoung;Kim, Ae-kyung;Lee, Sang-Sik
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.2 / pp.144-149 / 2018
  • Eggs are incubated for 18 days in the setter and then transferred to the hatcher. During this period, the weight loss of the egg is correlated with air-cell formation, and proper air-cell formation is in turn associated with healthy embryonic development and the hatching rate. However, domestic hatcheries currently have no apparatus for measuring weight during the setter period, so results are judged from rough, experience-based weight estimates by hatchery staff and from the stage of development. As a result, early mortality, starvation, and illness during hatching are frequent. Monitoring the reduction in egg weight is crucial to achieving good chick quality and hatching performance, because water loss differs depending on egg size, eggshell quality, and flock age. The hatching rate can be expected to increase if the weight change is measured in real time and the ventilation is adjusted accordingly, so a real-time measurement system is needed that can keep the total weight loss during incubation within 10 to 13%. Existing practice checks the weight only once, when the eggs are moved, and cannot monitor the evaporation of water from the eggs during the setter period. In the proposed system, which is designed not to disturb the eggs and therefore not to affect the hatching rate, four load cells are connected in parallel to an Arduino board, and AT commands are used to link a mobile phone and a computer in real time over Bluetooth. The Bluetooth communication speed was set to 115200 baud to match the communication speed of the Arduino and the hyper-terminal program. The real-time monitoring system was designed so that the change in egg weight inside the artificial incubator can be checked visually. In this way, we aimed to improve the hatching rate and the health of the hatching eggs.
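
A minimal sketch of the monitoring logic described above, assuming the 10-13% target loss accumulates roughly linearly over the 18-day setter period (a simplifying assumption not stated in the abstract); the Arduino/Bluetooth serial details are omitted.

```python
def expected_weight_band(initial_weight_g: float, day: float,
                         total_loss_min=0.10, total_loss_max=0.13, period_days=18.0):
    """Expected (low, high) tray weight on a given incubation day,
    assuming the 10-13% total loss accumulates linearly."""
    frac = min(day / period_days, 1.0)
    low = initial_weight_g * (1.0 - total_loss_max * frac)
    high = initial_weight_g * (1.0 - total_loss_min * frac)
    return low, high

def check_reading(initial_weight_g: float, day: float, measured_g: float) -> str:
    """Compare a real-time load-cell reading against the expected band."""
    low, high = expected_weight_band(initial_weight_g, day)
    if measured_g > high:
        return "losing too little water: increase ventilation"
    if measured_g < low:
        return "losing too much water: reduce ventilation"
    return "on track"

# usage with hypothetical numbers
print(check_reading(initial_weight_g=600.0, day=9, measured_g=565.0))  # -> "on track"
```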

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.77-92 / 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and high costs. Many studies have therefore been conducted on the automatic assignment of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to complex documents with multiple topics, because they assume that one document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories, but they are limited in that their learning process requires training on a multi-categorized document set; such methods cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are available. To remove the requirement for a multi-categorized training set in traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by applying topic analysis to single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores between each document and multiple categories; a document is assigned to a category if and only if its matching score exceeds a predefined threshold. For example, a document can be assigned to three categories whose matching scores are all above the threshold. The main contribution of our study is that the methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized ones. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles, which are clearly categorized by theme and contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies widely across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize the distortion caused by these differences, we extracted 3,000 articles from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores; the document/category correspondence score indicates the degree to which a document corresponds to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, precision, recall, and F-score varied considerably across the eight categories.
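
The score-combination step can be sketched as follows; the document/topic weights, topic/category weights, threshold, and the simple matrix product used to combine them are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

# hypothetical correspondence weights for illustration
doc_topic = np.array([[0.7, 0.2, 0.1],    # document/topic correspondence (rows: documents)
                      [0.1, 0.6, 0.3]])
topic_category = np.array([[0.9, 0.1, 0.0, 0.0],   # topic/category correspondence (rows: topics)
                           [0.0, 0.8, 0.2, 0.0],
                           [0.0, 0.1, 0.4, 0.5]])
categories = ["IT Science", "Economy", "Society", "Sports"]
THRESHOLD = 0.25  # assumed cut-off for assigning a category

# document/category correspondence = document/topic x topic/category
doc_category = doc_topic @ topic_category

for d, scores in enumerate(doc_category):
    assigned = [c for c, s in zip(categories, scores) if s >= THRESHOLD]
    print(f"document {d}: {assigned}")
```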

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.81-90 / 2013
  • During the past decade, many changes and new technologies have been introduced, and continue to be developed, in the computing area. The brick walls in computing, especially the power wall, have shifted the computing paradigm from hardware, including processors and system architecture, toward the programming environment and application usage. The high-performance computing (HPC) area in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea is well developed in ICT and is considered one of the leading countries in the world, but not in supercomputing. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims to provide a 300-TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, the file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed to be user-friendly and easy to use, based on integrated system-management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical peak performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The system will be upgraded to 100 TeraFLOPS in January 2013.

Development of an accreditation system for dietary and nutrition related education resources (영양.식생활 교육자료의 인증 시스템 개발 연구)

  • Kim, Ji-Myung;Lee, Kyoung Ae;Park, Yoo Kyoung;Lee, Kyung-Hea;Oh, Sang Woo;Lee, Hee Seung
    • Journal of Nutrition and Health / v.47 no.2 / pp.145-156 / 2014
  • Purpose: The purpose of this study was to establish an accreditation system for reliable educational materials on nutrition and dietary life that can be used in schools, workplaces, and health promotion. Methods: The study was conducted from April 2011 to October 2011. Literature reviews, institutional visits, and telephone interviews were carried out, and expert meetings and advisory councils were held to receive feedback on the development of the accreditation system. A survey on the accreditation procedure was conducted with 143 professionals, including professors, researchers, health and medical experts, teachers, nutrition teachers, dietitians, and clinical nutritionists. Results: The final procedure of the developed accreditation system was as follows: 1) applications are received twice per year; 2) a complete desk review (written evaluation) is performed by three reviewers within two months; 3) the board (all board members) reviews the materials and makes a decision; and 4) the results are announced. The accreditation system covers printed materials, web sites, and activity materials. A certificate and an accreditation mark are issued for the materials that are finally certified. An expiration date is set only for web-site materials. Accreditation is valid for two years and can be extended by a renewal application. Conclusion: Dietary and nutrition-related materials certified by this accreditation system can provide reliable information and knowledge to both learners and educators and help them select educational materials effectively. This accreditation system is therefore expected to increase satisfaction with teaching and learning about nutrition and healthy dietary life.