• Title/Summary/Keyword: Color Information (색 정보)


Experimental Proof for Symmetric Ramsey Numbers (대칭 램지 수의 실험적 증명)

  • Lee, Sang-Un
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.3
    • /
    • pp.69-74
    • /
    • 2015
  • This paper offers solutions to the unresolved problems $43{\leq}R(5,5){\leq}49$ and $102{\leq}R(6,6){\leq}165$ for Ramsey numbers. The Ramsey number $R(s,t)$ of a complete graph $K_n$ requires that the $n-1$ edges incident to an arbitrary vertex ${\upsilon}$ be dichotomized into two colors, with $(n-1)/2=R$ and $(n-1)/2=B$. Therefore, if one introduces the concept of distance to the vertex ${\upsilon}$, one may construct a partite graph $K_n=K_L+{\upsilon}+K_R$ satisfying $(n-1)/2=R$ for $\{K_L,{\upsilon}\}$ and $(n-1)/2=B$ for $\{{\upsilon},K_R\}$. Subsequently, given that $K_L$ forms the color R of $K_{s-1}$, $K_s$ is attainable; likewise, given that $K_R$ forms the color B of $K_{t-1}$, $K_t$ is obtained. Following these steps, $R(s,t)=K_n$ was obtained, satisfying the necessary and sufficient conditions that, for $K_L$ and $K_R$, the maximum distance be even and the incident edges of all vertices be equal. This paper accordingly proves R(5,5)=43 and R(6,6)=91.
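The diagonal cases $R(5,5)$ and $R(6,6)$ treated in the abstract are far beyond exhaustive search, but the definition of a Ramsey number can be verified by brute force for the smallest non-trivial case, $R(3,3)=6$. The sketch below is illustrative only and is not the paper's partite-graph construction:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j of K_n to color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_mono_triangle(n):
    """True iff every 2-coloring of K_n's edges contains a monochromatic K_3."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

# K_5 admits a triangle-free 2-coloring, K_6 does not: hence R(3,3) = 6.
print(every_coloring_has_mono_triangle(5))  # False
print(every_coloring_has_mono_triangle(6))  # True
```

The $K_6$ check enumerates all $2^{15}$ colorings of its 15 edges, which is instant; the same exhaustive approach is hopeless already for $K_{43}$, which is why the paper pursues a structural construction instead.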

A Study on the Recognition Algorithm of Paprika in the Images using the Deep Neural Networks (심층 신경망을 이용한 영상 내 파프리카 인식 알고리즘 연구)

  • Hwa, Ji Ho;Lee, Bong Ki;Lee, Dae Weon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.142-142
    • /
    • 2017
  • As part of developing a system for automatic paprika harvesting, this study designed and trained an artificial neural network that takes as input the RGB values of paprika and non-paprika regions in images acquired in a paprika cultivation environment. The trained network is expected to be able to distinguish paprika regions from non-paprika regions in images. The deep neural network was implemented using C++ and MFC in MS Visual Studio 2015, together with Python and TensorFlow. The network was designed with an input layer, an output layer, and eight hidden layers: 3 input neurons, 4 output neurons, and 5 neurons in each hidden layer. In general, the deeper a network is, the better the training results one can expect from fewer inputs, but training takes longer and the risk of overfitting grows. Accordingly, Xavier initialization was used to reduce training time, and the ReLU function was used as the activation function to reduce overfitting. RGB values of paprika and non-paprika regions were extracted from images of the cultivation environment as training inputs, with target outputs of [0 0 1] for red paprika, [0 1 0] for yellow paprika, and [1 0 0] for non-paprika regions, yielding 3,538 training samples. After training, 30 test samples were used for evaluation. Training was run repeatedly while varying the learning rate. With a learning rate of 0.01 or higher, training failed; this is presumably because the weight updates, which are determined by the learning rate, were so large that the cost function diverged rather than converging toward zero. Training succeeded at learning rates of 0.005 and 0.001. At 0.005, training took 3,146 iterations and 20.48 seconds, with 99.77% training accuracy and 100% test accuracy; at 0.001, it took 38,931 iterations and 181.39 seconds, with 99.95% training accuracy and 100% test accuracy. Smaller learning rates allowed more accurate training but took longer and were more likely to become trapped in local minima; larger learning rates reduced training time but often caused the cost to diverge during training so that learning failed.

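The architecture the abstract specifies (3 RGB inputs, eight hidden layers of 5 ReLU neurons, Xavier initialization) can be sketched as a NumPy forward pass. This is an illustrative reconstruction, not the authors' TensorFlow code; the output layer is taken here as 3 softmax units to match the one-hot targets [0 0 1]/[0 1 0]/[1 0 0], even though the abstract lists 4 output neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier(n_in, n_out):
    """Xavier/Glorot uniform initialization: keeps activation variance stable."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

# 3 RGB inputs -> 8 hidden layers of 5 ReLU neurons -> 3-class softmax output
sizes = [3] + [5] * 8 + [3]
weights = [xavier(a, b) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)        # ReLU hidden layers
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()

probs = forward(np.array([0.8, 0.1, 0.1]))     # one RGB pixel, scaled to [0, 1]
print(probs.shape)  # (3,)
```

Training the weights (by gradient descent at the learning rates compared in the abstract) is omitted; the sketch only shows the initialization and activation choices the study motivates.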

GIS-based Water Pollution Analysis (GIS기반의 오폐수 분석에 관한 연구)

  • Lee, Chol-Young;Kim, Kye-Hyun;Park, Tae-Og
    • Proceedings of the Korea Spatial Information System Society Conference
    • /
    • 2007.06a
    • /
    • pp.111-116
    • /
    • 2007
  • The total maximum daily load (TMDL) management system is currently mandatory in the three major river basins other than the Han River basin. Given its scientific validity and successful cases abroad, efforts are under way to make the system mandatory for the Han River basin as well. If the system is introduced there, every local government in the Han River region will have to keep the pollutant load discharged from its watershed into the river at or below its allocated load in order to meet the watershed's target water quality, and a local government that fails to manage its discharged load as planned will face fines or administrative sanctions. Systematic and scientific means of monitoring and analysis are therefore needed. Following the Han River technical guideline issued by the Ministry of Environment, this study used GIS to estimate the generated and discharged wastewater loads in the Incheon area and to explore scientific measures for pollutant reduction. Generated loads were computed in GIS for point sources classified into four categories (domestic, industrial, livestock, and aquaculture) and for non-point sources classified by land use; discharged loads were then computed by tracing the treatment path of every source and applying the removal efficiency of each treatment facility and method, and the results were presented and analyzed in GIS. Compared with neighboring areas, the Incheon area has a high population density and well-developed industrial complexes, so the generated and discharged loads from domestic and industrial sources were large, and reduction plans were found to be necessary for certain pollutants. To prepare for the TMDL system and actually improve water quality, further research based on these results is needed on refining the water-quality management system and establishing reduction plans.

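The load accounting the abstract describes (generated load per source category, then discharged load after treatment) can be sketched as follows. The unit loads and removal efficiency below are hypothetical placeholders, not values from the Han River technical guideline:

```python
# Hypothetical unit loads (kg BOD/day per source unit); actual values are
# prescribed by the Ministry of Environment guideline cited in the abstract.
UNIT_LOAD = {"domestic": 0.05, "industrial": 1.2,
             "livestock": 0.9, "aquaculture": 0.3}

def generated_load(counts):
    """Generated load = sum over source categories of (unit load x count)."""
    return sum(UNIT_LOAD[src] * n for src, n in counts.items())

def discharged_load(generated, removal_efficiency):
    """Discharged load = generated load reduced by the treatment path's
    removal efficiency (0.0 = no treatment, 1.0 = complete removal)."""
    return generated * (1.0 - removal_efficiency)

g = generated_load({"domestic": 10000, "industrial": 300, "livestock": 50})
print(round(g, 1))                          # 905.0
print(round(discharged_load(g, 0.85), 2))   # 135.75
```

In the study, these per-source figures are computed and visualized per watershed in GIS rather than as scalars; the arithmetic per source is the same.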

Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.41 no.10
    • /
    • pp.762-773
    • /
    • 2014
  • As the web of data produces increasingly large RDFS datasets, building reasoning engines that scale to large triple sets has become essential. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triple sets. In many cases, however, we only need to handle millions of triples, and deploying an expensive distributed system is then unnecessary, because a logic-programming-based reasoner on a single machine can match the reasoning performance of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner that uses logic programming methods on a single machine and compare our empirical results with those of distributed systems. We show that our logic-programming-based reasoner on a single machine performs comparably to an expensive distributed reasoner for up to 200 million RDFS triples. In addition, we designed a metadata structure that decomposes the ontology triples into separate sectors: instead of loading all triples into a single model, we select an appropriate subset of the triples for each ontology reasoning rule. Since unification makes it easy to handle the conjunctive queries required for RDFS schema reasoning, we designed and implemented the RDFS axioms using logic-programming unification and an efficient conjunctive query handling mechanism. Our approach reached a throughput of 166K triples/sec on LUBM1500 with 200 million triples, comparable to the 185K triples/sec of WebPIE, a distributed reasoner built on Hadoop and MapReduce. This shows that a distributed system is unnecessary for up to 200 million triples, at which scale a logic-programming-based reasoner on a single machine is competitive with an expensive Hadoop-based distributed reasoner.
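The rule-wise materialization that RDFS reasoning amounts to can be sketched as a naive fixpoint loop over two of the RDFS entailment rules. This toy sketch is illustrative only; it is not the authors' unification-based logic-programming implementation, and a real reasoner would index triples per rule rather than rescan the whole set:

```python
def rdfs_closure(triples):
    """Naive forward chaining for two RDFS entailment rules:
    rdfs11 (subClassOf transitivity) and rdfs9 (type propagation)."""
    facts = set(triples)
    while True:
        new = set()
        for s, p, o in facts:
            if p == "rdfs:subClassOf":
                # rdfs11: (A subClassOf B), (B subClassOf C) => (A subClassOf C)
                new |= {(s, p, o2) for s2, p2, o2 in facts
                        if p2 == "rdfs:subClassOf" and s2 == o}
                # rdfs9: (x type A), (A subClassOf B) => (x type B)
                new |= {(x, "rdf:type", o) for x, p2, c in facts
                        if p2 == "rdf:type" and c == s}
        if new <= facts:          # fixpoint: nothing new was derived
            return facts
        facts |= new

kb = {("Student", "rdfs:subClassOf", "Person"),
      ("Person", "rdfs:subClassOf", "Agent"),
      ("alice", "rdf:type", "Student")}
closed = rdfs_closure(kb)
print(("alice", "rdf:type", "Agent") in closed)  # True
```

The paper's sectoring idea corresponds to feeding each rule only the triple subset it can match (here, only `rdfs:subClassOf` and `rdf:type` triples), which keeps memory and scan costs bounded on a single machine.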

Container BIC-code region extraction and recognition method using multiple thresholding (다중 이진화를 이용한 컨테이너 BIC 부호 영역 추출 및 인식 방법)

  • Song, Jae-wook;Jung, Na-ra;Kang, Hyun-soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.6
    • /
    • pp.1462-1470
    • /
    • 2015
  • The container BIC code is used for convenience in international shipping and combined transport: it is an identification code for a marine shipping container that conveys a wide variety of information, including the country code. As transportation by aircraft and ship continues to rise, ports require fast and accurate processes to manage it. Accordingly, this paper proposes a BIC-code region extraction and recognition method using multiple thresholds. Applying a fixed threshold to code recognition is unreasonable, because illumination varies with weather, lighting, camera position, container color, and so on. The proposed method therefore applies multiple thresholds and selects the best recognition result at the final stage. For each threshold, we perform binarization, labeling, BIC-code pattern decision (horizontal or vertical) by a morphological close operation, and character separation from the BIC code; each character is then recognized by template matching. Finally, we measure a recognition confidence score for every threshold and choose the best one. Experimental results show that the proposed method recognizes container BIC codes accurately and is robust to illumination change.
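The try-every-threshold-and-keep-the-best scheme the abstract describes can be sketched as follows. The confidence function here is a stand-in for illustration; the paper scores the full recognition pipeline (labeling, pattern decision, template matching), not the binarization alone:

```python
import numpy as np

def binarize(gray, t):
    """Threshold a grayscale image at t, yielding a 0/1 image."""
    return (gray >= t).astype(np.uint8)

def recognition_confidence(binary):
    """Stand-in confidence score (hypothetical): rewards binarizations that
    are neither all-foreground nor all-background. The real system would
    score template-matching results for the separated characters."""
    fg = binary.mean()
    return 1.0 - abs(fg - 0.5) * 2.0

def best_threshold(gray, thresholds):
    """Run the pipeline once per threshold; keep the highest-scoring one."""
    return max(thresholds, key=lambda t: recognition_confidence(binarize(gray, t)))

gray = np.array([[10, 200],
                 [30, 220]])
print(best_threshold(gray, [15, 128, 240]))  # 128
```

The point of the design is that no single fixed threshold survives all weather, lighting, and container-color conditions, so the decision is deferred until after recognition has been attempted under each candidate.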

Efficient Parallel Spatial Join Processing Method in a Shared-Nothing Database Cluster System (비공유 공간 클러스터 환경에서 효율적인 병렬 공간 조인 처리 기법)

  • Chung, Warn-Ill;Lee, Chung-Ho;Bae, Hae-Young
    • The KIPS Transactions:PartD
    • /
    • v.10D no.4
    • /
    • pp.591-602
    • /
    • 2003
  • When Internet users converge excessively on a conventional single large database server, the surge in network traffic and resource consumption causes service delays and outages. To solve these problems, spatial database clusters, composed of several single nodes on a high-speed network, have emerged to offer high performance. However, the spatial join operation, which can degrade the performance of the whole system when processed at a single node, has not been studied in this setting. In this paper, we therefore propose an efficient parallel spatial join processing method for a spatial database cluster that uses data partitioning and replication methods tailored to the characteristics of spatial data. Since the proposed method needs no task-creation or task-assignment steps and incurs no additional message transmission between cluster nodes, both of which appear in existing parallel spatial join methods, it improves performance by 23% over the conventional parallel R-tree spatial join for a shared-nothing architecture on expensive spatial join queries. It also minimizes response time to the user by removing redundant refinement operations at each cluster node.
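The filter step of a partitioned spatial join can be sketched as follows. This is a generic grid-partitioned MBR join for illustration, not the authors' cluster method; in a shared-nothing cluster each grid cell's bucket would live on one node and the per-cell joins would run in parallel:

```python
from collections import defaultdict

def cells(box, size=10):
    """Grid cells overlapped by a bounding box (xmin, ymin, xmax, ymax)."""
    x0, y0, x1, y1 = box
    return {(cx, cy)
            for cx in range(int(x0 // size), int(x1 // size) + 1)
            for cy in range(int(y0 // size), int(y1 // size) + 1)}

def overlaps(a, b):
    """Axis-aligned bounding-box intersection test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def partitioned_spatial_join(r, s, size=10):
    """Bucket both relations by grid cell, join within each cell, and
    de-duplicate pairs that meet in more than one cell (the redundant
    work the abstract's method aims to eliminate)."""
    buckets = defaultdict(lambda: ([], []))
    for i, box in enumerate(r):
        for c in cells(box, size):
            buckets[c][0].append(i)
    for j, box in enumerate(s):
        for c in cells(box, size):
            buckets[c][1].append(j)
    result = set()
    for ris, sjs in buckets.values():
        result |= {(i, j) for i in ris for j in sjs if overlaps(r[i], s[j])}
    return result

r = [(0, 0, 5, 5), (20, 20, 25, 25)]
s = [(3, 3, 8, 8), (40, 40, 45, 45)]
print(partitioned_spatial_join(r, s))  # {(0, 0)}
```

Because a box spanning several cells is replicated into each of them, a naive per-cell join can test (and report) the same pair more than once; the set-based de-duplication above is the single-machine analogue of the redundant-refinement removal the abstract claims per node.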

An Investigation on Characteristics and Intellectual Structure of Sociology by Analyzing Cited Data (사회학 분야의 연구데이터 특성과 지적구조 규명에 관한 연구)

  • Choi, Hyung Wook;Chung, EunKyung
    • Journal of the Korean Society for information Management
    • /
    • v.34 no.3
    • /
    • pp.109-124
    • /
    • 2017
  • Across a wide variety of disciplines, practices of data access and re-use have increased recently. In particular, researchers increasingly use data sets produced by other researchers and give scholarly credit through citation. Reflecting this practice, Thomson Reuters launched the Data Citation Index (DCI) in 2012; with the DCI, citations to research data published by researchers are collected and analyzed in much the same way as citations to journal articles. The purpose of this study is to identify the characteristics and intellectual structure of sociology, one of the fields that actively cites data, on the basis of research data. To this end, two data sets were collected and analyzed. First, a total of 8,365 records in sociology were collected from the DCI; second, a total of 12,132 records were collected from Web of Science by a topic search for 'Sociology'. Co-word analysis of the author-provided keywords for both data sets showed that the intellectual structure of research-data-based sociology comprises two areas and 15 clusters, while that of article-based sociology comprises three areas and 17 clusters. More importantly, medical science was found to be actively studied in research-data-based sociology, and public health and psychology were identified as central areas in data citation.
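The co-word analysis the study applies to author-provided keywords can be sketched as pair counting over per-record keyword sets; the records and keywords below are hypothetical examples, not the study's data:

```python
from collections import Counter
from itertools import combinations

def coword_matrix(keyword_lists):
    """Count how often each pair of author keywords co-occurs on a record.
    Pairs are stored sorted so (a, b) and (b, a) count as one pair."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical keyword lists standing in for DCI / Web of Science records.
records = [["aging", "health", "survey"],
           ["health", "inequality"],
           ["aging", "health"]]
m = coword_matrix(records)
print(m[("aging", "health")])  # 2
```

The resulting pair counts form the co-occurrence matrix that clustering then partitions into the areas and clusters reported in the abstract.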

The Effect of the Quality of Pre-Assigned Subject Categories on the Text Categorization Performance (학습문헌집합에 기 부여된 범주의 정확성과 문헌 범주화 성능)

  • Shim, Kyung;Chung, Young-Mee
    • Journal of the Korean Society for information Management
    • /
    • v.23 no.2
    • /
    • pp.265-285
    • /
    • 2006
  • In text categorization, a certain level of correctness is assumed for the labels assigned to training documents, without solid knowledge of that level in real-world collections. Our research explores the quality of pre-assigned subject categories in a real-world collection, and identifies the relationship between the quality of category assignment in the training set and text categorization performance. In particular, we are interested in the extent to which performance can be improved by enhancing the quality (i.e., correctness) of category assignment in the training documents. A collection of 1,150 abstracts in computer science was re-classified by an expert group and divided into 907 training documents and 227 test documents (15 duplicates were removed). The performance of the groups before and after re-classification, called the Initial set and the Recat-1/Recat-2 sets respectively, was compared using a kNN classifier. The average correctness of the subject categories in the Initial set is 16%, and categorization with the Initial set achieves an $F_1$ value of 17%. The Recat-1 set, by contrast, scores an $F_1$ value of 61%, which is 3.6 times that of the Initial set.
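The evaluation pipeline the abstract relies on, a kNN classifier scored by $F_1$, can be sketched on toy data. This is illustrative only; the study's feature vectors, k, and per-category averaging are not specified in the abstract:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Toy kNN: train is a list of (vector, label); majority vote among the
    k nearest neighbors by squared Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda vl: dist(vl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def f1(y_true, y_pred, positive):
    """F1 for one category: harmonic mean of precision and recall."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

train = [((0, 0), "A"), ((0, 1), "A"), ((5, 5), "B"), ((5, 6), "B")]
print(knn_predict(train, (1, 0)))                            # A
print(round(f1(["A", "B", "A"], ["A", "B", "B"], "A"), 3))   # 0.667
```

The study's central point shows up directly in this setup: mislabeling the training pairs (the Initial set's 16% correctness) corrupts the neighbor votes, which is why re-classification alone lifted $F_1$ from 17% to 61%.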

Development of test methodology and detail standard for ECDIS (선박항해용전자해도시스템 인증 기준 및 시험기술 개발)

  • 심우성;서상현
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2004.04a
    • /
    • pp.269-274
    • /
    • 2004
  • Marine electronic systems for safe navigation such as ECDIS have contributed to increasing the safety of navigation by decreasing the mariner's navigational workload. An ECDIS must be developed and approved against the international standards of the IMO, for the performance standard, and the IEC, for type-approval methods and required results. However, these standards contain ambiguities that prevent us from adopting them directly in a real approval system, so we must analyze them to clarify their meaning and prepare our own detailed standard for a type-approval system. The first step of this research was therefore to analyze the standards in detail and resolve their ambiguities in our own standards, considering each test item from the viewpoint of test methodology. From this analysis, we developed a clearer and more detailed type-approval standard for each test item, together with the test technology needed. In particular, we developed a colour differentiation test process for ECDIS monitors, which includes a colour differentiation formula derived from the CIE colour scheme. Several test items require sensor information from navigation equipment compatible with IEC 61162, so we also developed a signal simulator for the general IEC 61162 messages that must be provided. Additionally, type-approval processes and standards for the back-up arrangement and RCDS mode were developed.

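The abstract derives its colour differentiation formula from the CIE colour scheme but does not state it; as a stand-in, the widely used CIE76 colour difference (Euclidean distance in CIELAB) illustrates the kind of check such a test performs. The palette values and JND threshold below are hypothetical:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIELAB
    colours (L*, a*, b*). A stand-in for the paper's unstated formula."""
    return math.dist(lab1, lab2)

def distinguishable(lab1, lab2, jnd=2.3):
    """A display palette pair passes if its difference is at least one
    just-noticeable difference (~2.3 Delta-E under CIE76)."""
    return delta_e76(lab1, lab2) >= jnd

# Hypothetical CIELAB values for two chart colours on a day palette.
day_palette = {"land": (80.0, 5.0, 30.0), "shallow_water": (75.0, -5.0, -20.0)}
print(distinguishable(day_palette["land"], day_palette["shallow_water"]))  # True
```

A monitor test process would sweep every colour pair in each ECDIS display palette (day, dusk, night) and flag pairs whose measured difference falls below the threshold.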

Adaptive Image Content-Based Retrieval Techniques for Multiple Queries (다중 질의를 위한 적응적 영상 내용 기반 검색 기법)

  • Hong Jong-Sun;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.3 s.303
    • /
    • pp.73-80
    • /
    • 2005
  • Recently, there have been many efforts to support searching and browsing based on the visual content of image and multimedia data. Most existing approaches to content-based image retrieval rely on query by example or on user-specified low-level features such as color, shape, and texture, but such queries are neither easy to use nor expressive. In this paper, we propose a method of automatic color object extraction and labelling to support multiple queries in a content-based image retrieval system. Our approach simplifies the regions within images using a single colorizing algorithm and extracts color objects using the proposed Color and Spatial based Binary tree map (CSB tree map). By scanning a large number of the processed regions, an index for the database is created using the proposed labelling method. This allows very fast indexing of images by their color content and spatial attributes. Furthermore, information about the labelled regions, such as color set, size, and location, enables a variety of multiple queries that combine both the color content and the spatial relationships of regions. Experiments on the 'Washington' image database comparing our system against another algorithm showed that it achieves high performance.
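The region labelling step the abstract describes, extracting color objects from a colorized (color-quantized) image, can be sketched as connected-component labelling. This generic flood-fill sketch is illustrative; the CSB tree map structure itself is not specified in the abstract:

```python
def label_color_regions(image, target):
    """4-connected component labelling of pixels that carry a given
    quantized colour index. image is a 2-D list of colour indices, as if
    produced by the colorizing step; returns (label map, region count)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] != target or labels[sy][sx]:
                continue
            next_label += 1
            stack = [(sy, sx)]
            while stack:                      # flood-fill one region
                y, x = stack.pop()
                if not (0 <= y < h and 0 <= x < w):
                    continue
                if image[y][x] != target or labels[y][x]:
                    continue
                labels[y][x] = next_label
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, next_label

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
labels, n = label_color_regions(img, 1)
print(n)  # 2
```

Per-region attributes such as size and bounding box (location) fall out of the label map directly, which is exactly the information the abstract says enables combined color-and-spatial multiple queries.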