• Title/Summary/Keyword: Computerized Evaluation System (컴퓨터화 평가 시스템)

Search Results: 357

A Benchmark of Micro Parallel Computing Technology for Real-time Control in Smart Farm (MPICH vs OpenMP) (스마트 시설환경 실시간 제어를 위한 마이크로 병렬 컴퓨팅 기술 분석)

  • Min, Jae-Ki;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.161-161
    • /
    • 2017
  • The control elements of a smart facility environment are a complex mix of factors directly involved in regulating the environment, such as heaters, window actuators, water/nutrient-solution valves, ventilation fans, and dehumidifiers, and factors indirectly related to control, such as communication for information exchange and user interfaces. Control based on mathematical logic, such as PID control, can coexist with control by nonlinear learning models built on the knowledge of expert managers. Conventional sequence-based control schemes may be limited in linking these diverse elements together: a scheme that, as in conventional practice, decides the amount and timing of control from sufficient time-series data can struggle with exceptional situations, which divide into those arising inevitably from changes in natural conditions and those caused by system faults. This study investigated a way to complement a control system built on mathematical, predictable logic by analyzing the various environmental factors inside a facility in real time and performing the corresponding control. High-performance computing (HPC) was formerly an advanced, high-end technology that concentrated computing power by interconnecting many computers over high-speed networks, demanding large investments in cost and scale. With the advance of mobile phones and mobile devices, small microprocessors have progressed, and application processors (APs) reaching clock speeds of about 2 GHz have appeared. We studied how to apply APs, whose advantages are low power consumption and small platform size despite relatively low performance, to real-time control of facility environments. To compare the performance of AP-based micro-clustering, three systems differing in CPU clock, memory size, and core count were compared: 1) 1.5 GHz, 8 processors, 32 cores, 1 GB/processor, 32-bit Linux (ARMv7l); 2) 2.0 GHz, 4 processors, 32 cores, 2 GB/processor, 32-bit Linux (ARMv7l); 3) 1.5 GHz, 8 processors, 32 cores, 2 GB/processor, 64-bit Linux (AArch64). MPICH (www.mpich.org) and OpenMP (www.openmp.org) were used as parallel-computing development libraries. Finding the primes among integers up to 2,500,000,000 took 1) 17 s, 2) 13 s, and 3) 3 s, and a two-dimensional FFT on a 12800 × 12800 matrix took 1) 10 s, 2) 8 s, and 3) 2 s. System 3 can be judged faster than a commodity desktop with a clock speed of about 3 GHz. Results were approximately identical across the two libraries. When 3D linear interpolation was performed at 1 s intervals on the 3D measurement data obtained in a previous study, using 4 or fewer cores gave nearly identical results, while using 8 or more cores showed a trend similar to the earlier results. We will continue studying AP-based micro-clustering, comprehensively considering field deployability, construction cost, and power consumption.
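The prime-counting benchmark workload above can be sketched in miniature. This Python `multiprocessing` version only illustrates the task decomposition; it is not the MPICH/OpenMP code the authors ran, and the range is scaled far down.

```python
# Illustrative sketch of the benchmark task: count primes in a range by
# splitting it across worker processes (stand-in for the MPICH/OpenMP runs).
from multiprocessing import Pool

def is_prime(n: int) -> bool:
    """Trial division; adequate for this small illustrative range."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def count_primes_chunk(bounds):
    lo, hi = bounds
    return sum(1 for n in range(lo, hi) if is_prime(n))

def count_primes_parallel(limit: int, workers: int = 4) -> int:
    # Split [0, limit) into contiguous chunks, one batch per worker.
    step = limit // workers + 1
    chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    with Pool(workers) as pool:
        return sum(pool.map(count_primes_chunk, chunks))

if __name__ == "__main__":
    print(count_primes_parallel(10_000))  # 1229 primes below 10,000
```

The same decomposition scales to the paper's 2.5-billion range; only the per-chunk primality test and the communication layer (MPI vs. shared memory) change.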


A Fast Scattered Pilot Synchronization Algorithm for DVB-H receiver modem (DVB-H 수신기 모뎀을 위한 고속 분산 파일럿 동기 알고리즘)

  • Um Jung-Sun;Do Joo-Hyun;Lee Hyun;Choi Hyung-Jin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.11A
    • /
    • pp.1081-1091
    • /
    • 2005
  • Unlike conventional DVB-T transmission, which uses a streaming method, the DVB-H system, based on IPDC (IP Datacasting), uses a time-slicing scheme to maximize portability by reducing receiver power consumption. To enhance the receiver's power efficiency, time slicing restricts receiver operation to the burst in its specific time slot. Additional power saving can be achieved by reducing the time required for synchronization. In this paper, we propose a fast scattered-pilot synchronization algorithm that detects the pilot pattern of the currently received OFDM symbol. The proposed scheme is based on the correlation between adjacent subcarriers at potential scattered-pilot positions in two consecutively received OFDM symbols. It can therefore complete scattered-pilot synchronization within two symbols, compared with the conventional method used for DVB-T, and it outperforms both the two schemes proposed by Nokia for DVB-H and a method using correlation with a reference signal. Extensive computer simulation based on ETSI EN 300 744 shows that the proposed algorithm operates more efficiently and stably than the conventional schemes.
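The detection idea can be sketched as follows. In DVB-T/H, scattered pilots sit every 12 subcarriers at an offset that cycles through {0, 3, 6, 9} from symbol to symbol, and pilot cells are power-boosted. The sketch below scores each candidate offset by the energy at its pilot positions in one synthetic symbol; this energy score merely stands in for the paper's adjacent-subcarrier correlation metric, and the carrier count and signal values are invented.

```python
# Hypothetical sketch: identify which of the four scattered-pilot offsets
# the current OFDM symbol uses, exploiting the pilot power boost (16/9).
def pilot_positions(offset, n_carriers, spacing=12):
    return range(offset, n_carriers, spacing)

def detect_pilot_offset(symbol, n_carriers, spacing=12):
    """Pick the offset in {0, 3, 6, 9} whose candidate positions carry the most power."""
    scores = {
        offset: sum(abs(symbol[k]) ** 2
                    for k in pilot_positions(offset, n_carriers, spacing))
        for offset in (0, 3, 6, 9)
    }
    return max(scores, key=scores.get)

# Synthetic symbol: data cells at amplitude 1.0, pilot cells boosted to 4/3.
N = 48
true_offset = 6
symbol = [4 / 3 if (k - true_offset) % 12 == 0 else 1.0 for k in range(N)]
```

Running the detector over two consecutive symbols (whose offsets differ by 3) is what lets the pattern, and hence symbol timing within the pilot cycle, be fixed within two symbols.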

Improvement of ISMS Certification Components for Virtual Asset Services: Focusing on CCSS Certification Comparison (안전한 가상자산 서비스를 위한 ISMS 인증항목 개선에 관한 연구: CCSS 인증제도 비교를 중심으로)

  • Kim, Eun Ji;Koo, Ja Hwan;Kim, Ung Mo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.8
    • /
    • pp.249-258
    • /
    • 2022
  • Since the advent of Bitcoin, various virtual assets have been actively traded through the virtual asset services of virtual asset exchanges. Because security incidents have recently occurred frequently at virtual asset exchanges, the government now obliges exchanges to obtain information security management system (ISMS) certification to strengthen information protection, and 56 additional specialized items have been established. In this paper, we compared the domain importance of ISMS and of the CryptoCurrency Security Standard (CCSS), a set of requirements for all information systems that make use of cryptocurrencies, and analyzed the results after mapping them to gain insight into the characteristics of each certification system. Classifying the improvement items into three priority stages (High, Medium, and Low), we derived improvements for the four High-level items. These results can provide priorities for virtual asset and information system security, support methodical and systematic decision-making on improving certification items, and contribute to vitalizing virtual asset transactions by enhancing the reliability and safety of virtual asset services.

Managing the Reverse Extrapolation Model of Radar Threats Based Upon an Incremental Machine Learning Technique (점진적 기계학습 기반의 레이더 위협체 역추정 모델 생성 및 갱신)

  • Kim, Chulpyo;Noh, Sanguk
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.13 no.4
    • /
    • pp.29-39
    • /
    • 2017
  • Various electronic warfare situations drive the need for an integrated electronic warfare simulator that can model and simulate radar threats. In this paper, we analyze the components of a simulation system that reversely models radar threats emitting electromagnetic signals from the parameters of electronic intelligence, and we propose a method to incrementally maintain the reverse extrapolation model of RF threats. In the experiments, we evaluate the effectiveness of incremental model updates and assess methods for integrating the reverse extrapolation models. Individual models of RF threats are constructed using a decision tree, a naive Bayes classifier, an artificial neural network, and clustering algorithms based on Euclidean distance and cosine similarity, respectively. Experimental results show that the accuracy of the reverse extrapolation models improves as the number of threat samples increases. In addition, we use voting, weighted voting, and the Dempster-Shafer algorithm to integrate the results of the five different models of RF threats. The final decision made through the Dempster-Shafer algorithm shows the best accuracy.
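The fusion step can be illustrated with a small sketch of Dempster's rule of combination, which the abstract names as the best-performing integrator. The hypothesis names and mass values below are invented for illustration; the paper's actual threat classes and classifier outputs are not shown here.

```python
# Sketch of Dempster's rule: combine two mass functions whose focal
# elements are frozensets of hypotheses, renormalizing away conflict.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic-probability-assignment dicts over frozenset keys."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    # Normalize by the non-conflicting mass (assumes conflict < 1).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two models' beliefs over hypothetical threat types A and B.
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.3, frozenset({"A", "B"}): 0.2}
fused = dempster_combine(m1, m2)
```

Fusing all five classifier outputs is just a left fold of `dempster_combine` over their mass functions; the final decision picks the singleton hypothesis with the largest fused mass.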

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data; moreover, such schemas cannot expand across nodes when rapidly growing data must be distributed to multiple nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented store, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log-analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions; the aggregated log data in the MongoDB module are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert-performance evaluation over various chunk sizes.
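The collector's routing decision described above can be sketched minimally. The field names and log types here are hypothetical, since the paper does not enumerate them; the point is that real-time logs take the MySQL path while heterogeneous aggregated logs, whose documents need not share a schema, take the MongoDB path.

```python
# Minimal sketch of the log collector's routing rule: real-time logs go to
# the MySQL module, aggregated unstructured logs to the MongoDB module.
def route_log(entry: dict) -> str:
    """Return the destination module for one log entry."""
    return "mysql" if entry.get("realtime") else "mongodb"

# Unstructured logs: documents in the same stream may carry different
# fields, which is why a schema-free document store fits the MongoDB path.
logs = [
    {"branch": "01", "event": "login", "realtime": True},
    {"branch": "01", "event": "transfer", "amount": 500, "realtime": False},
    {"branch": "02", "event": "error", "trace": "timeout", "realtime": False},
]
routed = [route_log(e) for e in logs]
```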

Occlusal Analysis of the Subjects with Chewing Side Preference Using the T-Scan II System (T-Scan II 시스템을 이용한 습관적 편측저작자들의 교합 분석)

  • Park, Eun-Hee;Kim, Mee-Eun;Kim, Ki-Suk
    • Journal of Oral Medicine and Pain
    • /
    • v.31 no.3
    • /
    • pp.245-254
    • /
    • 2006
  • While orofacial pain or various dental factors are generally considered the primary causes of a unilateral chewing tendency, several studies indicate that dental factors do not affect the preferred chewing side. The aim of this study was to examine differences in occlusal scheme between subjects with and without a chewing-side preference, as well as differences between the chewing and non-chewing sides within the unilateral chewing group. The computerized T-Scan II system was used for occlusal analysis. Twenty subjects for the unilateral chewing group (mean age 25.25 ± 2.84 years) and twenty for the bilateral chewing group (mean age 27.00 ± 5.07 years) were selected by a questionnaire on the presence or absence of a chewing-side preference; those with occlusal problems or with pain and/or dysfunction of the jaw were excluded. T-Scan recordings were obtained during maximum intercuspation and excursive movement. The number of contact points, the relative right-to-left occlusal force ratio, the tooth sliding area, and the elapsed time through maximum intercuspation were calculated; elapsed time for excursion was also investigated. The results show that the unilateral chewing group had smaller average tooth contact areas than the bilateral group (p<0.005). Within the unilateral chewing group, the contact areas of the non-chewing side were smaller than those of the chewing side (p<0.005). Contact areas on the preferred side did not differ significantly from those of either side in subjects without a chewing-side preference. There were no significant differences between the two groups in elapsed time during maximum intercuspation and lateral excursion, in sliding areas, or in the relative right-to-left occlusal force ratio. These results suggest that individuals prefer chewing on the side with more contact areas for efficient chewing.

Performance Evaluation of DSE-MMA Blind Equalization Algorithm in QAM System (QAM 시스템에서 DSE-MMA 블라인드 등화 알고리즘의 성능 평가)

  • Kang, Dae-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.6
    • /
    • pp.115-121
    • /
    • 2013
  • This paper studies the DSE-MMA (Dithered Sign-Error MMA), which reduces the computational load of the blind equalization algorithm used to compensate for the intersymbol interference that arises when a signal passes through a band-limited, phase-distorting nonlinear communication channel. The SE-MMA algorithm is attractive for hardware implementation because it replaces the multiplication in the equalizer tap-weight update with a 1-bit quantizer, reducing the arithmetic cost; however, the information lost in quantization degrades overall equalization performance compared with MMA. DSE-MMA applies the dithered sign-error concept by adding a dither signal before quantization in MMA, improving the SNR performance that represents the robustness of the equalization algorithm, while retaining, like SE-MMA and MMA, the ability to compensate simultaneously for the amplitude and phase distortion caused by intersymbol interference. The paper uses the equalizer output signal, residual ISI, MD, the MSE learning curve, and the SER curve as performance indices of the blind equalization algorithm, and computer simulations compare SE-MMA and DSE-MMA under the same indices. The simulations show that DSE-MMA improves robustness and the steady-state value of every performance index relative to SE-MMA, but converges more slowly, meaning the adaptive equalization filter takes longer to reach steady state from its initial state.
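The tap update the abstract describes, MMA's error term passed through a 1-bit quantizer with a dither signal added first, can be sketched roughly as follows. This is a simplified real-valued illustration with invented constants (`mu`, `R`, `dither_amp`), not the paper's implementation; the complex in-phase/quadrature form applies the same update to each component.

```python
# Sketch of one DSE-MMA tap-weight update (real-valued, illustrative).
# With dither_amp = 0 this degenerates to plain SE-MMA.
import random

def dse_mma_update(w, x, mu, R, dither_amp=0.1):
    """Return updated tap weights after one dithered sign-error step."""
    y = sum(wi * xi for wi, xi in zip(w, x))     # equalizer output
    e = y * (y * y - R)                          # MMA error term
    d = random.uniform(-dither_amp, dither_amp)  # dither signal
    s = 1.0 if (e + d) >= 0 else -1.0            # 1-bit quantizer
    # Only the sign of the (dithered) error multiplies the input vector,
    # so no full multiplication by e is needed in hardware.
    return [wi - mu * s * xi for wi, xi in zip(w, x)]
```

Averaged over the dither distribution, the quantizer output tracks the error magnitude more faithfully than a bare sign, which is the mechanism behind the improved steady-state indices reported above.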

Signifying Practices of Technoculture in the age of Data Capitalism: Cultural and Political Alternative after the Financial Crisis of 2008 (데이터자본주의 시대 테크노컬처의 의미화 실천: 2008년 글로벌 금융위기 이후의 문화정치적 대안)

  • Lim, Shan
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.3
    • /
    • pp.143-148
    • /
    • 2022
  • The subject of this paper is practical examples of technoculture that think critically about network technology, the strong material foundation of 21st-century data capitalism, and appropriate its socio-cultural metaphors as artistic potential. To analyze these alternatives and their cultural-political meaning, this paper examines the properties and influence of data capitalism after the 2008 global financial crisis and the cultural and artistic context formed in reaction to it. The first case considered, Furtherfield's workshops, provided a useful example of how citizens can participate in social change through learning and education in which art and technology are interrelated. The second case, the Greek hackerspace HSGR, developed network technology as a tool for overcoming crisis, proposing a new progressive cultural commons amid Greece's financial crisis, triggered by the global financial crisis, and the decline of state support for creative work. The third case, Paolo Cirio's projects, promoted critical citizenship toward the state and community systems as dominant types of social governance. These cases of technoculture can be read as efforts to combine and rediscover a progressive political ideology and its tradition of artistic realization in the context of cultural politics, attending to the signifying practices made possible by the network technology that dominates the contemporary economic system.

Performance Analysis of Implementation on Image Processing Algorithm for Multi-Access Memory System Including 16 Processing Elements (16개의 처리기를 가진 다중접근기억장치를 위한 영상처리 알고리즘의 구현에 대한 성능평가)

  • Lee, You-Jin;Kim, Jea-Hee;Park, Jong-Won
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.3
    • /
    • pp.8-14
    • /
    • 2012
  • Demand for faster image processing is growing with the spread of high-quality visual media and massive image applications such as 3D TV, movies, and AR (augmented reality). A SIMD computer attached to a host computer can accelerate various image processing and massive data operations. MAMS is a multi-access memory system that, together with multiple processing elements (PEs), is well suited to building a high-performance pipelined SIMD machine. MAMS supports simultaneous access to pq data elements within a horizontal, vertical, or block subarray with a constant interval at an arbitrary position in an M × N array of data elements, where the number of memory modules (MMs), m, is a prime number greater than pq. MAMS-PP4, the first realization of the MAMS architecture, consists of four PEs on a single chip and five MMs. This paper presents the implementation of image processing algorithms and a performance analysis for MAMS-PP16, which consists of 16 PEs with 17 MMs, extending the prior work on MAMS-PP4. The newly designed MAMS-PP16 has a 64-bit instruction format and an application-specific instruction set. The authors developed a simulator of the MAMS-PP16 system on which the implemented algorithms can be executed, and performance analysis was carried out by running the implemented image processing algorithms on this simulator. The analysis verifies the consistent response of MAMS-PP16 on the pyramid operation in image processing algorithms, compared with a Pentium-based serial processor: executing the pyramid operation on MAMS-PP16 yields consistent processing times, whereas the serial processor's response times vary randomly.
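The conflict-free access property can be illustrated with a textbook skewing function: with m prime and m > pq, assigning element (i, j) to module (i·q + j) mod m places every element of a horizontal, vertical, or p×q block subarray in a distinct module. This is a standard scheme of the kind MAMS builds on, not necessarily the exact MAMS-PP16 mapping.

```python
# Illustrative skewed module assignment: element (i, j) -> (i*q + j) mod m,
# with m prime and m > p*q, so any row, column, or p x q block of pq
# elements touches pq distinct memory modules (no access conflicts).
def module_of(i: int, j: int, q: int, m: int) -> int:
    return (i * q + j) % m

def modules_of_block(i, j, p, q, m):
    """Modules touched by the p x q block whose top-left element is (i, j)."""
    return {module_of(i + a, j + b, q, m) for a in range(p) for b in range(q)}

# MAMS-PP16 parameters: pq = 16 processing elements, m = 17 memory modules.
p, q, m = 4, 4, 17
block = modules_of_block(5, 9, p, q, m)
assert len(block) == p * q  # 16 distinct modules: the block is conflict-free
```

Within a block, offsets contribute a·q + b, which ranges over 0..pq-1 and is therefore distinct mod m; rows and columns work out similarly because q is invertible modulo a prime m.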

A Study on the Measurement of Cold-Induced Vascular Response (한냉혈관반응 측정에 관한 연구)

  • 정종만;이영숙
    • Proceedings of the ESK Conference
    • /
    • 1997.04a
    • /
    • pp.203-211
    • /
    • 1997
  • This study compared young and elderly male subjects by immersing the distal segment of a finger in ice water under environmental conditions of 15 ± 1 °C and 26 ± 1 °C air temperature and 55 ± 5% humidity, measuring the changes in oral temperature, skin temperature at four sites, fingertip skin temperature, whole-body thermal sensation, whole-body comfort, and fingertip pain. This report presents the young male subject group. The results are as follows. At 15 ± 1 °C, among the four skin-temperature sites, the chest and upper arm fell slightly on finger immersion and then rose again, while the thigh and lower leg fell and stayed low; the lower leg in particular tended to fall sharply. Fingertip skin temperature dropped sharply at the moment of immersion and did not recover to its pre-immersion level after the finger was withdrawn. Mean skin temperature tended to fall during immersion. Whole-body comfort was rated slightly uncomfortable, whole-body thermal sensation was rated cool, and fingertip pain was rated very painful. At 26 ± 1 °C, skin temperature at all four sites (chest, upper arm, thigh, lower leg) fell slightly on immersion and remained low. Fingertip skin temperature dropped sharply during immersion but recovered to the pre-immersion level after the finger was withdrawn. Mean skin temperature fell slightly after immersion, but the difference was small. Whole-body comfort was rated slightly comfortable, thermal sensation slightly warm, and fingertip pain slightly painful.
