• Title/Summary/Keyword: 환경정보시스템 (Environmental Information System)

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems / v.4 no.6 / pp.185-196 / 2015
  • Energy aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service) comparable to that of energy non-aware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle current user requests stay ON. Previous studies on energy aware server clusters have tried to further reduce power consumption or to keep QoS, but they do not adequately consider energy efficiency. In this paper, we propose an energy efficient cluster management method based on autonomous learning for energy aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. Our method repeats the following procedure for adjusting the power modes of servers. First, according to the current load and traffic pattern, it classifies the current workload pattern type in a predetermined way. Second, it searches a learning table to check whether learning has been performed for the classified workload pattern type in the past. If so, it uses the already-stored parameters. Otherwise, it performs learning for the classified workload pattern type to find the best parameters in terms of energy efficiency and stores the optimized parameters. Third, it adjusts server power modes with these parameters. We implemented the proposed method and performed experiments with a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than the existing methods in terms of energy efficiency: the numbers of good responses per unit of power consumed in the proposed method are 99.8%, 107.5%, and 141.8% of those in the existing static method, and 102.0%, 107.0%, and 106.8% of those in the existing prediction method, for the banking load pattern, real load pattern, and virtual load pattern, respectively.
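
  To illustrate the control loop this abstract describes, here is a minimal sketch; `get_load`, `classify_pattern`, `learn_parameters`, and `apply_power_modes` are hypothetical placeholders for the paper's monitoring, classification, learning, and power-mode control components, not its actual implementation.

```python
# A minimal sketch of the learning-table control loop described in the abstract.
# All helper functions are hypothetical placeholders, not the paper's code.
import time

learning_table = {}  # workload pattern type -> optimized parameters

def manage_cluster(get_load, classify_pattern, learn_parameters,
                   apply_power_modes, interval_sec=60):
    while True:
        load, traffic = get_load()
        pattern = classify_pattern(load, traffic)   # step 1: classify workload
        if pattern not in learning_table:           # step 2: learn on a table miss
            learning_table[pattern] = learn_parameters(pattern)
        apply_power_modes(learning_table[pattern])  # step 3: adjust power modes
        time.sleep(interval_sec)
```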

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the introduction of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. The criterion is simply based on the length of the code; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or search methods that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. For those learning about deep learning models, the availability of sufficient examples and references also matters.
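
  Because automatic differentiation over computational graphs is the pivotal function in this comparison, a tiny generic illustration may help; the following reverse-mode sketch in plain Python shows the edge-wise chain rule the abstract describes, and is not code from Theano, Tensorflow, or CNTK.

```python
# A tiny reverse-mode automatic differentiation sketch: each node stores a value
# and, after backward(), the partial derivative of the output with respect to it,
# accumulated edge by edge via the chain rule. Generic illustration only.
class Node:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local_grad in self.parents:   # chain rule along each edge
            parent.backward(seed * local_grad)

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = add(mul(x, w), b)   # y = x*w + b
y.backward()            # dy/dx = w = 3.0, dy/dw = x = 2.0, dy/db = 1.0
print(x.grad, w.grad, b.grad)
```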

Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE: Computer Systems and Theory / v.33 no.4 / pp.242-255 / 2006
  • In recent years, the popularization of mobile devices such as smart phones, PDAs, and MP3 players has rapidly increased the need for power management technology, since power is one of the most essential resources of mobile devices. The hard disk, on the other hand, offers large capacity and high speed at low cost, and today it can be made small enough to suit mobile devices; however, it consumes too much power to be embedded in them. Motivated by this, in this paper we propose and evaluate methods for minimizing power consumption while playing back multimedia data stored on disk in real time. The strict limitation on the power consumption of mobile devices has a big impact on the design of both hardware and software. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. This is why the disk drive must remain in the active state for the entire playback duration; from the power management point of view, this may be a great burden. The legacy power management function of a mobile disk drive affects the quality of multimedia playback negatively because of excessive I/O requests issued while the disk is in the standby state. Therefore, in this paper, we analyze the power consumption profile of the disk drive in detail, and we develop an algorithm which can play multimedia data effectively using less power. The algorithm calculates the number of data blocks to be read and the durations of the active and standby states. From this, it performs optimal scheduling that ensures continuous playback of the data blocks stored on the mobile disk drive. We implemented our algorithm in publicly available MPEG player software. This player saves up to 60% of power consumption compared with a disk drive kept active for the entire playback, and 38% compared with a disk drive controlled by the native power management method.
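
  The burst/standby calculation at the heart of such a scheduler can be sketched as follows; all parameter values are hypothetical, and the paper's algorithm would additionally weigh spin-up energy when deciding whether standby pays off.

```python
# A minimal sketch of the burst/standby scheduling calculation described above.
# All parameter values are hypothetical illustrations, not the paper's numbers.
def schedule(buffer_bytes, playback_Bps, disk_read_Bps, spinup_sec):
    burst_sec = buffer_bytes / disk_read_Bps   # time to fill the buffer in one burst
    drain_sec = buffer_bytes / playback_Bps    # time the full buffer sustains playback
    # Standby window: drain time minus the next refill burst and the
    # spin-up delay needed before that burst can start.
    standby_sec = drain_sec - burst_sec - spinup_sec
    return burst_sec, max(standby_sec, 0.0)

burst, standby = schedule(buffer_bytes=8 * 2**20,    # 8 MiB buffer
                          playback_Bps=512 * 1024,   # ~4 Mbps stream
                          disk_read_Bps=20 * 2**20,  # 20 MiB/s sustained read
                          spinup_sec=1.5)
print(f"read for {burst:.2f}s, standby for {standby:.2f}s")
```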

Development of a Testing Environment for Parallel Programs based on MSC Specifications (MSC 명세를 기반으로 한 병렬 프로그램 테스팅 환경의 개발)

  • Kim, Hyeon-Soo;Bae, Hyun-Seop;Chung, In-Sang;Kwon, Yong-Rae;Chung, Young-Sik;Lee, Byung-Sun;Lee, Dong-Gil
    • Journal of KIISE: Computing Practices and Letters / v.6 no.2 / pp.135-149 / 2000
  • Most prior work on testing parallel programs has concentrated on how to guarantee reproducibility by employing event traces exercised during executions of a program. Consequently, little work has been done on generating test cases, especially from the specifications produced during the software development process. In this research, we devise techniques for deriving test cases automatically from specifications written in Message Sequence Charts (MSCs), which are widely used in the telecommunications area, and we develop a testing environment for performing module testing of parallel programs with the derived test cases. To derive test cases from MSCs, we have to uncover the causality relations among events embedded implicitly in the MSCs. For this, we devise methods for adapting vector time stamping to MSCs. Then, valid event sequences satisfying the causality relations are generated and used as test cases. The generated test cases, written in TTCN, are translated into CHILL source code, which interacts with a target module to be tested and tests the validity of the module's behaviors. Since the testing method developed in this research extracts test cases from the MSC specifications produced during the telecommunications software development process, it is not necessary to describe auxiliary specifications for testing. In addition, since adapting vector time stamping automatically generates the event sequences, the event sequences generated for the whole system can also be used for individual testing purposes.
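
  The vector time stamping idea can be illustrated with a small generic sketch; the process layout, event names, and message pairing below are hypothetical, not taken from the paper.

```python
# A minimal sketch of vector time stamping over MSC-like events, assuming the
# events are supplied in some valid global order (as read off the chart).
# Process indices, event names, and the message pairing are hypothetical.
def vector_timestamps(events, messages, n_procs):
    """events: list of (event_id, process_index) in a causally valid order;
    messages: {recv_event_id: send_event_id} pairing events across processes."""
    stamp, clocks = {}, [[0] * n_procs for _ in range(n_procs)]
    for ev, p in events:
        clk = clocks[p]
        clk[p] += 1                      # local tick on this process axis
        if ev in messages:               # receive: merge the sender's timestamp
            clk[:] = [max(a, b) for a, b in zip(clk, stamp[messages[ev]])]
        stamp[ev] = list(clk)
    return stamp

def happened_before(s, t):
    """Causality test: s -> t iff s <= t component-wise and s != t. Valid test-case
    event sequences are exactly the linearizations consistent with this relation."""
    return all(a <= b for a, b in zip(s, t)) and s != t

# Example: P0 sends message m to P1, then each process does a local event.
ts = vector_timestamps(
    events=[("send_m", 0), ("recv_m", 1), ("a0", 0), ("b1", 1)],
    messages={"recv_m": "send_m"}, n_procs=2)
assert happened_before(ts["send_m"], ts["recv_m"])
assert not happened_before(ts["a0"], ts["b1"])  # concurrent events
```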

Participation Level in Online Knowledge Sharing: Behavioral Approach on Wikipedia (온라인 지식공유의 참여정도: 위키피디아에 대한 행태적 접근)

  • Park, Hyun Jung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.97-121 / 2013
  • With the growing importance of knowledge for sustainable competitive advantage and innovation in a volatile environment, much research on knowledge sharing has been conducted. However, previous studies have mostly relied on questionnaire surveys, which carry the inherent perceptual errors of respondents. The current research derives the relationships between primary participant behaviors and the level of participation in knowledge sharing from actual online user behaviors on Wikipedia, a representative community for online knowledge collaboration. Without users' participation in knowledge sharing, knowledge collaboration for creating knowledge cannot succeed. The editing patterns of Wikipedia users are diverse, resulting in different revisiting periods for the same number of edits and thus in varying results of shared knowledge. Therefore, we examine the participation level of knowledge sharing from two different angles: the number of edits and the revisiting period. The behavioral dimensions affecting the level of participation in knowledge sharing include article talk for public discussion, user talk for private messaging, and community registration, all of which are observable on the Wiki platform. Public discussion progresses on article talk pages arranged for exchanging ideas about each article topic. An article talk page is often divided into several sections, each mainly addressing a specific type of issue raised during the article development procedure. Everything from diverse opinions about relatively trivial matters, such as what text, links, or images should be added or removed and how they should be restructured, to profound professional insights is shared, negotiated, and improved over the course of discussion. Wikipedia also provides personal user talk pages as a private messaging tool. On these pages, diverse personal messages such as casual greetings, stories about activities on Wikipedia, and ordinary affairs of life are exchanged. If anyone wants to communicate with another person, he or she visits that person's user talk page and leaves a message. Wikipedia articles are assessed according to seven quality grades, of which the featured article level is the highest. The dataset includes participants' behavioral data related to 2,978 articles that have reached the featured article level, with the editing histories of the articles, their article talk histories, and the user talk histories extracted from the user talk pages for each article. The time period for analysis extends from the initiation of each article until its promotion to the featured article level. The number of edits represents the total number of times a user participated in editing an article, and the revisiting period is the time difference between the first and last edits. First, the participation levels of the user categories classified according to the behavioral dimensions were analyzed and compared. Then, robust regressions were conducted on the relationships between independent variables reflecting the degree of the behavioral characteristics and the dependent variable representing the participation level. In particular, by adopting a motivational theory suited to the online environment in setting up the research hypotheses, this work suggests a theoretical framework for the participation level of online knowledge sharing. Consequently, this work reached the following practical behavioral results, besides some theoretical implications. First, both public discussion and private messaging positively affect the participation level in knowledge sharing. Second, public discussion exerts greater influence than private messaging on the participation level. Third, a synergy effect of public discussion and private messaging on the number of edits was found, whereas a rather weak negative interaction effect on the revisiting period was observed. Fourth, community registration has a significant impact on the revisiting period, whereas it is insignificant for the number of edits. Fifth, regarding the relations generated from private messaging, the frequency or depth of a relation is shown to be more critical to the participation level than its scope.
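
  The robust regression step might look like the following minimal sketch, using statsmodels' Huber M-estimator; the column names and values are hypothetical stand-ins for the behavioral variables and participation measures described above.

```python
# A minimal robust-regression sketch using statsmodels' RLM (Huber M-estimator).
# The columns stand in for the behavioral variables (article talk, user talk,
# registration) and one outcome (number of edits); all values are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "article_talk": [3, 0, 7, 2, 5, 1],   # public-discussion messages per user
    "user_talk":    [1, 0, 4, 2, 3, 0],   # private messages per user
    "registered":   [1, 0, 1, 1, 0, 0],   # community registration flag
    "n_edits":      [40, 5, 120, 30, 75, 8],
})
X = sm.add_constant(df[["article_talk", "user_talk", "registered"]])
model = sm.RLM(df["n_edits"], X, M=sm.robust.norms.HuberT()).fit()
print(model.params)  # robust coefficient estimates
```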

Monitoring of a Time-series of Land Subsidence in Mexico City Using Space-based Synthetic Aperture Radar Observations (인공위성 영상레이더를 이용한 멕시코시티 시계열 지반침하 관측)

  • Ju, Jeongheon;Hong, Sang-Hoon
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1657-1667 / 2021
  • Anthropogenic activities and natural processes cause land subsidence, the sudden sinking or gradual settlement of the earth's solid surface. Mexico City, the capital of Mexico, is one of the areas most severely affected by land subsidence, which results from excessive groundwater extraction: groundwater is the city's primary water resource, accounting for almost 70% of total water usage. Traditional terrestrial observations such as the Global Navigation Satellite System (GNSS) or leveling surveys have been preferred for measuring land subsidence accurately. Although GNSS observations provide highly accurate surface displacement information with very high temporal resolution, they are often limited by sparse spatial coverage, long measurement times, and high cost. Space-based synthetic aperture radar (SAR) interferometry, however, has been widely used as a powerful tool to monitor surface displacement with high spatial resolution and high accuracy at the mm to cm scale, regardless of day or night and weather conditions. In this paper, advanced interferometric approaches are applied to obtain a time series of land subsidence in Mexico City using twenty ALOS PALSAR L-band observations acquired over four years, from February 11, 2007 to February 22, 2011. We utilized the persistent scatterer interferometry (PSI) and small baseline subset (SBAS) techniques to suppress atmospheric artifacts and topography errors. The results show that the maximum subsidence rates from the PSI and SBAS methods were -29.5 cm/year and -27.0 cm/year, respectively. In addition, we discuss the different subsidence rates across the three districts into which the study area is divided according to distinctive geotechnical characteristics. The most significant subsidence occurred in the lacustrine sediments, which have higher compressibility than the harder bedrock.
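
  The SBAS network-inversion step can be sketched for a single pixel as follows; the dates, interferogram pairs, and phase values are hypothetical, and real processing would also handle atmospheric filtering, topographic error terms, and coherence-based pixel selection.

```python
# A minimal sketch of the SBAS (small baseline subset) inversion for one pixel:
# unwrapped interferograms constrain sums of incremental displacements between
# consecutive acquisitions, solved by least squares. All values are hypothetical.
import numpy as np

dates = np.array([0, 46, 92, 138, 184])   # acquisition times in days (hypothetical)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]  # small-baseline pairs
obs = np.array([-1.2, -1.1, -2.3, -1.0, -1.05, -2.05])    # LOS displacement, cm

# Unknowns: incremental displacement between consecutive acquisitions.
A = np.zeros((len(pairs), len(dates) - 1))
for row, (i, j) in enumerate(pairs):
    A[row, i:j] = 1.0                      # interferogram spans increments i..j-1

incr, *_ = np.linalg.lstsq(A, obs, rcond=None)
ts = np.concatenate([[0.0], np.cumsum(incr)])        # displacement time series
rate = np.polyfit(dates / 365.25, ts, 1)[0]          # linear rate, cm/year
print(ts, rate)
```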

Process Governance Meta Model and Framework (프로세스 거버넌스 메타모델과 프레임워크)

  • Lee, JungGyu;Jeong, Seung Ryul
    • Journal of Internet Computing and Services / v.20 no.4 / pp.63-72 / 2019
  • As a sub-concept of corporate or organizational governance, business governance and IT governance have become major research topics in academia. However, despite the importance of process as a construct mediating the domain between business and information technology, research on process governance is relatively inadequate. Process governance focuses on activities that link business strategy with IT system implementation and explains the creation of corporate core values. The researcher studied the basic conceptual governance models of political science, sociology, and public administration, and classified governance styles into six categories. The researcher then focused on a series of metamodels: for example, the traditional Strategic Alignment Model (SAM) by Henderson and Venkatraman, which is replaced by the neo-SAM model; the organizational governance network model; the sequential organization governance model; the organization governance meta model; the process governance CUBE model; the COSO and process governance CUBE comparison model; and finally the Process Governance Framework. The major difference between the SAM and neo-SAM models is the Process Governance domain inserted between Business Governance and IT Governance. Among the several metamodels, the Process Governance framework, the core conceptual model, consists of four activity dimensions: strategic aligning, human empowering, competency enhancing, and autonomous organizing. The researcher designed five variables for each activity dimension, twenty variables in total. Besides the four activity dimensions, there are six driving forces for the Process Governance cycle: de-normalizing power, micro-power, vitalizing power, self-organizing power, normalizing power, and sense-making. With the four activity dimensions and six driving forces, an organization can maintain the flexibility of its process governance cycle to cope with internal and external environmental changes. This study aims to propose the Process Governance competency model and the Process Governance variables. Industry is shifting from function-oriented organization management to a process-oriented perspective. The Process Governance framework proposed by the researcher will serve as a contextual reference model for the further diffusion of research on the Process Governance domain and as an operational definition for the detailed development of Process Governance measurement tools.

A Study on Industry-specific Sustainability Strategy: Analyzing ESG Reports and News Articles (산업별 지속가능경영 전략 고찰: ESG 보고서와 뉴스 기사를 중심으로)

  • WonHee Kim;YoungOk Kwon
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.287-316 / 2023
  • As the global energy crisis and the COVID-19 pandemic have emerged as social issues, there is a growing demand for companies to move away from profit-centric business models and embrace sustainable management that balances environmental, social, and governance (ESG) factors. The ESG activities of companies vary across industries, and industry-specific weights are applied in ESG evaluations. Therefore, it is important to develop strategic management approaches that reflect the characteristics of each industry and the importance of each ESG factor. Additionally, as the focus on ESG disclosure strengthens, specific guidelines are needed for identifying and reporting the sustainable management activities of domestic companies. To understand corporate sustainability strategies, analyzing ESG reports and news articles by industry can help identify strategic characteristics in specific industries. However, each company has its own strategies and report structures, making it difficult to grasp detailed trends or action items. In our study, we analyzed the ESG reports (2019-2021) and news articles (2019-2022) of six companies in the 'Finance,' 'Manufacturing,' and 'IT' sectors to examine the sustainability strategies of leading domestic ESG companies. Text mining techniques such as keyword frequency analysis and topic modeling were applied to identify industry-specific and ESG element-specific management strategies and issues. The analysis revealed that in the 'Finance' sector, customer-centric management strategies and efforts to promote an inclusive culture within and outside the company were prominent, and strategies addressing climate change, such as carbon neutrality and expanding green finance, were also emphasized. In the 'Manufacturing' sector, the focus was on creating sustainable communities through occupational health and safety issues, sustainable supply chain management, low-carbon technology development, and eco-friendly investments to achieve carbon neutrality. In the 'IT' sector, there was a tendency to focus on technological innovation and digital responsibility to enhance social value through technology. Furthermore, the key issues identified for each ESG element were as follows: under the 'Environmental' element, greenhouse gas and carbon emission management, industry-specific eco-friendly activities, and green partnerships; under the 'Social' element, social contribution activities through stakeholder engagement, supporting the growth and coexistence of members and partner companies, and enhancing customer value through stable service provision; and under the 'Governance' element, strengthening board independence through the appointment of outside directors, risk management and communication for sustainable growth, and establishing transparent governance structures. Exploring the relationship between the ESG disclosures in the reports and the ESG issues in the news articles revealed that the sustainability strategies disclosed in the reports were aligned with the ESG-related issues reported in the news. However, there was a tendency to strengthen ESG activities for prevention and improvement after negative media coverage that could harm the corporate image. Additionally, environmental issues were mentioned more frequently in the news articles than in the ESG reports, while environment-related keywords were emphasized in the 'Finance' sector reports. Thus, ESG reports and news articles shared some similarities in content because they draw on common information sources; however, media coverage influenced the emphasis on specific sustainability strategies, and the extent to which environmental issues were mentioned varied across document types. Our study makes the following contributions. From a practical perspective, companies need to consider their own characteristics and establish sustainability strategies that align with their capabilities and situations. From an academic perspective, unlike previous studies on ESG strategies, we present a finer-grained methodology that analyzes companies while taking their industry-specific characteristics into account.
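
  The keyword-frequency and topic-modeling step might be sketched as follows with scikit-learn; the toy documents stand in for text extracted from ESG reports and news articles and are purely hypothetical.

```python
# A minimal sketch of keyword frequency analysis plus LDA topic modeling on a
# toy corpus. The documents are hypothetical stand-ins for ESG report and news
# article text; real use would add tokenization, stopword, and language handling.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "carbon neutrality green finance climate risk disclosure",
    "supply chain safety eco friendly investment low carbon technology",
    "board independence outside directors transparent governance",
    "customer value inclusive culture social contribution stakeholders",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)                  # keyword frequency matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):  # top keywords per discovered topic
    top = topic.argsort()[-5:][::-1]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```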

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.171-187 / 2023
  • When a person's face is captured by a recording device such as a low-pixel surveillance camera, it is difficult to recognize the face due to low image quality. In situations where a face cannot be recognized, problems such as failing to identify a criminal suspect or a missing person may occur. Existing studies on face recognition used refined datasets, so performance could not be measured across diverse environments. Therefore, to solve the problem of poor face recognition performance on low-quality images, this paper proposes a method that generates high-quality images by performing image quality improvement on low-quality facial images from various environments, and then improves the performance of facial keypoint detection. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small within the image. In addition, by choosing a facial image dataset that includes mask-wearing situations, the possibility of extending the method to real problems was explored. Measuring the performance of the keypoint detection model after improving the quality of the facial images confirmed that face detection was enhanced by an average of 3.47 times for images without a mask and 9.92 times for images with a mask. The RMSE of the facial keypoints decreased by an average of 8.49 times with a mask and by an average of 2.02 times without a mask. Therefore, the applicability of the proposed method was verified: image quality improvement increases the recognition rate for facial images captured at low quality.
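
  The evaluation idea, comparing keypoint RMSE before and after image enhancement, can be sketched as follows; `enhance` and `detect_keypoints` are hypothetical placeholders, not the models used in the paper.

```python
# A minimal sketch of the evaluation described above: enhance a low-quality face
# image, run keypoint detection, and compare RMSE against ground-truth landmarks.
# `enhance` and `detect_keypoints` are hypothetical stand-ins for the
# super-resolution and keypoint models, passed in as callables.
import numpy as np

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Root mean squared error over (N, 2) landmark coordinate arrays."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

def evaluate(image, gt_landmarks, enhance, detect_keypoints):
    base = rmse(detect_keypoints(image), gt_landmarks)               # raw input
    improved = rmse(detect_keypoints(enhance(image)), gt_landmarks)  # after SR
    return base, base / improved   # ratio > 1 means enhancement helped
```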

An Understanding the Opening Style of the West Philippine Basin Through Multibeam High-Resolution Bathymetry (고해상도 다중빔음향측심 지형자료 분석을 통한 서필리핀분지의 진화 연구)

  • Hanjin Choe;Hyeonuk Shin
    • Journal of the Korean Earth Science Society / v.44 no.6 / pp.643-654 / 2023
  • The West Philippine Basin, an oceanic basin half the size of the Philippine Sea Plate, lies in the western part of the plate, south of the Korean Peninsula on the Eurasian Plate. It subducts beneath the Eurasian Plate and the Philippine Islands along the Ryukyu Trench and the Philippine Trench, and 25-50% of the basin has already been consumed. However, the opening history of the basin's southern region has been a topic of debate. The non-transform discontinuities formed during seafloor spreading resemble transform fault boundaries, which are normally perpendicular to mid-ocean ridge axes; however, they were created irregularly by ridge propagation caused by variations in mantle convection attributable to changes in magma supply. By analyzing high-resolution multi-beam echo-sounding data, we confirmed that the non-transform discontinuity due to the propagating rift evolved across the entire basin and that the abyssal hill strike direction changed from E-W to NNW-SSE away from the fossil spreading center. In the early stage of basin extension, the Amami-Sankaku Basin was rotated 90 degrees clockwise from its current orientation and bordered the Palau Basin along the Mindanao Fracture Zone. The Amami-Sankaku Basin separated from the Palau Basin as the spreading of the West Philippine Basin began with a counter-clockwise rotation. This indicates that the non-transform discontinuities formed through a sudden change in magma supply caused by the drift of the Philippine Sea Plate, simultaneously with rapid changes in the spreading direction from ENE-WSW to N-S. The Palau Basin was long considered the southern sub-basin of the West Philippine Basin, but recent studies have shown that it is an independent system. Evidence from sediment layers and crustal thickness hints at the possibility that it existed before the West Philippine Basin opened, although its evolution continues to be debated. We performed a combined analysis using high-resolution multi-beam bathymetry and satellite gravity data to uncover new insights into the evolution of the West Philippine Basin. This information illuminates the complex plate interactions and provides a crucial contribution toward understanding the opening history of the basin and of the Philippine Sea Plate.