• Title/Summary/Keyword: Network structure

Southeast Asian Hindu Art from the 6th to the 7th Centuries (6-7세기의 동남아 힌두 미술 - 인도 힌두미술의 전파와 초기의 변용 -)

  • Kang, Heejung
    • The Southeast Asian review / v.20 no.3 / pp.263-297 / 2010
  • The relics of the first-phase Southeast Asian civilizations are found together with relics from India, China, and even further west, from Persia and Rome. These relics are the historic marks of ancient interaction among the continents, mainly through maritime trade. The traces of Indic culture that appear in the historic age are represented in textual records and arts regarded as the essence of India itself. The ancient Hindu arts found in various locations of Southeast Asia were long thought to have been transplanted directly from India. However, the Gupta Hindu art of India neither formed the mainstream of Gupta art nor played an influential role in adjacent areas. Indian culture was transmitted to Southeast Asia intermittently rather than consistently. A thorough comparison of the early Hindu art of India with that of Southeast Asia shows that the latter was influenced by the former but still sustained a Southeast Asian originality. The reason the earliest Southeast Asian Hindu art is found mostly in continental Southeast Asia is that the earliest networks between India and the region were constructed there. The images of Hindu gods produced before the 7th century include Shiva, Vishnu, Harihara, Skanda (the son of Shiva), and Ganesha (the god of wealth). The earliest example of Vishnu was sculpted in the Kushan style; after that, most of the sculptures came to have robust figures and graceful proportions. There are a small number of images of Ganesha and Skanda, and these strictly follow the iconography of Indian sculpture. This shows that Southeast Asians selectively chose their own gods from the Hindu pantheon and devoted their faith to them. Their basic iconography obediently followed the Indian model, but they transformed parts of the images within Southeast Asian contexts. However, it is very difficult to trace the development of the Hindu faith and its contents in ancient Southeast Asia, because very few undamaged Hindu temples remain in the region. It is also difficult to confirm that the Hindu religion of India, which was based on complex rituals and the caste system, was transplanted to Southeast Asia, because the region had no comparably strong social and religious foundation. "Indianization" is an organized expansion of Indian culture based on a sense of belonging to an Indian context. It can be defined through the transmission and development of the Hindu or Buddhist religions, the legends of the puranas, and the influx and elaboration of various epic expressions, conditions represented through the Sanskrit language and through art. Fabricating an image of a god as a devotional object is an element of Indian culture. However, if we look into the details of iconography, style, and religious culture, these can be understood as a "selective reception of foreign religious culture." Around the 7th century, there was not yet a sophisticated social structure in Southeast Asia to sustain Indian culture. Whether this phenomenon was an "Indianization" or merely an "influx of elements of Indian culture," it was closely related to the matter of 'localization.' The regional character of each locality in Southeast Asia appears only partially after the 8th century, and it is not clear whether this culture settled in each region as its dominant culture. The localization of Indian culture in Southeast Asia, which functioned as a network connecting ports and cities, was part of the localization of Indian culture across the pan-Southeast Asian region and of the process of building the basis for each Southeast Asian region's identity.

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.27 no.3 / pp.127-143 / 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experimentation on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analysis of the performance and features of commercial cloud services for numerical modeling is essential for selecting appropriate systems, as it can help minimize execution time and the amount of resources used. Cache memory has a large effect on the processing structure of an ocean numerical model, which reads and writes data in multidimensional array structures, and network speed is important because of the communication pattern in which large amounts of data move between processes. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking software package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for the transition of other ocean models into cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes in virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory; memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be a helpful reference for constructing the best computing system in the cloud to minimize the time and cost of numerical ocean modeling.
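The core-count observation above (more cores help large grids more than small ones) is essentially a strong-scaling comparison. The following is a minimal sketch, not taken from the paper, of how such speedup and parallel-efficiency figures could be tabulated from measured ROMS wall-clock times; the timing values are hypothetical placeholders.

```python
# Hypothetical strong-scaling summary for a ROMS-like run on a cloud cluster.
# The wall-clock times below are placeholders, not measurements from the paper.

measured = {
    # grid label: {core count: wall-clock seconds}
    "small grid": {1: 1200.0, 8: 260.0, 16: 170.0, 32: 140.0},
    "large grid": {1: 96000.0, 8: 13500.0, 16: 7200.0, 32: 4100.0},
}

def scaling_table(times):
    """Return (cores, speedup, efficiency) rows relative to the 1-core run."""
    base = times[1]
    rows = []
    for cores in sorted(times):
        speedup = base / times[cores]
        efficiency = speedup / cores
        rows.append((cores, speedup, efficiency))
    return rows

for grid, times in measured.items():
    print(grid)
    for cores, speedup, eff in scaling_table(times):
        print(f"  {cores:3d} cores  speedup {speedup:6.1f}  efficiency {eff:5.2f}")
```

With numbers like these, efficiency falling off faster for the small grid would reproduce the qualitative conclusion quoted in the abstract.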

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the models. In this case, word vectors generally refer to vector representations of the words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in sentiment analysis studies of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors derived from grammatically correct texts of a domain other than the analysis target, or morpheme vectors derived from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can a satisfactory level of classification accuracy be reached when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare the classification accuracy obtained with a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ according to three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing, namely only splitting sentences, or additionally correcting spelling and spacing after sentence separation. Third, they vary in the form of input fed into the word vector model: whether the morphemes are entered by themselves or with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of the morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included appear to have no definite influence on classification accuracy.
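As a rough illustration of the morpheme-vector step described above, the sketch below derives CBOW vectors (window 5, dimension 300) from morpheme-tokenized sentences, optionally with POS tags attached. It assumes gensim's Word2Vec and a toy, already-tokenized corpus; the paper does not name its implementation, so treat both as assumptions.

```python
# Minimal sketch: CBOW morpheme vectors (window 5, dim 300), as described in the abstract.
# gensim's Word2Vec is an assumed implementation choice; the toy corpus is illustrative only.
from gensim.models import Word2Vec

# Sentences already split into morphemes (e.g., by a Korean morphological analyzer).
corpus_plain = [
    ["배송", "이", "빠르", "고", "제품", "도", "예쁘", "다"],
    ["향", "이", "좋", "지만", "용량", "이", "작", "다"],
]

# Variant with POS tags attached to each morpheme, one of the options compared in the study.
corpus_tagged = [
    ["배송/NNG", "이/JKS", "빠르/VA", "고/EC", "제품/NNG", "도/JX", "예쁘/VA", "다/EF"],
    ["향/NNG", "이/JKS", "좋/VA", "지만/EC", "용량/NNG", "이/JKS", "작/VA", "다/EF"],
]

def train_morpheme_vectors(sentences, min_count=1):
    # sg=0 selects CBOW; window and vector_size follow the abstract (5 and 300).
    return Word2Vec(sentences, vector_size=300, window=5, min_count=min_count, sg=0)

model = train_morpheme_vectors(corpus_plain)
print(model.wv["예쁘"].shape)  # (300,) morpheme vector, later fed into the CNN as input
```

The minimum-frequency and POS-tag-attachment variants the study compares would correspond here to changing min_count and swapping in corpus_tagged.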

A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.35-55 / 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, which contribute to growth promotion and sustainable development. The selection of information technology and its strategic use are imperative for enhanced performance in every aspect of company management, leading a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment. In other words, researchers and managers cannot easily identify the independent factors that affect the investment performance of information technology, mainly because many factors, ranging from a company's internal components and strategies to its external customers, are interconnected with that performance. Using an agent-based simulation technique, this research extracts factors expected to affect investment performance in information technology, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in those factors. In terms of economic modeling, I extend the model in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand creation resulting from product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999). This concept implies that investment in information technology improves the quality of information, which in turn improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through an information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. The results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the company's economic performance, particularly consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in the quality of its product or service immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the enhanced quality of the company's product or service creates new consumer demand through the information diffusion effect, and this new demand positively affects the company's profit and productivity. In terms of investment strategy for information technology, the results also reveal that the selection of information technology needs to be based on analysis of the service and of the network effect among customers, and demonstrate that information technology implementation should fit the company's business strategy. Specifically, if a company seeks a short-term enhancement of performance, it needs a one-shot strategy (making a large investment at one time); if it seeks a long-term sustainable profit structure, it needs a split strategy (making several small investments at different times). The findings make several contributions to the literature. In terms of methodology, the study integrates economic modeling and simulation techniques to overcome the limitations of each. It also indicates the mediating effect of product quality on the relationship between information technology and company performance. Finally, it analyzes the effect of information technology investment strategies and of information diffusion among consumers on the investment performance of information technology.
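The demand-creation mechanism described above (word of mouth spreading over a consumer network) can be illustrated with a toy agent-based sketch. Everything below, including the adoption rule and the parameter values, is a hypothetical simplification, not the paper's model.

```python
# Toy agent-based sketch of word-of-mouth demand diffusion on a random consumer network.
# The adoption rule and all parameters are illustrative assumptions, not the paper's model.
import random

random.seed(42)

N = 200          # number of consumer agents
P_EDGE = 0.03    # probability that two consumers know each other
QUALITY = 0.6    # perceived product/service quality in [0, 1]

# Random (Erdos-Renyi-style) acquaintance network.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            neighbors[i].add(j)
            neighbors[j].add(i)

adopters = set(random.sample(range(N), 5))  # initial customers created by the IT investment

for step in range(20):
    new_adopters = set()
    for i in range(N):
        if i in adopters:
            continue
        informed = len(neighbors[i] & adopters)
        # Chance of adopting grows with quality and with the number of adopting acquaintances.
        if informed and random.random() < QUALITY * (1 - 0.7 ** informed):
            new_adopters.add(i)
    adopters |= new_adopters
    print(f"step {step:2d}: {len(adopters)} adopters (+{len(new_adopters)})")
```

Raising QUALITY (the modeled effect of the IT investment) accelerates diffusion, which is the qualitative channel from quality to new demand that the abstract describes.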

A Study on the Use of GIS-based Time Series Spatial Data for Streamflow Depletion Assessment (하천 건천화 평가를 위한 GIS 기반의 시계열 공간자료 활용에 관한 연구)

  • YOO, Jae-Hyun;KIM, Kye-Hyun;PARK, Yong-Gil;LEE, Gi-Hun;KIM, Seong-Joon;JUNG, Chung-Gil
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.50-63 / 2018
  • Rapid urbanization has distorted the natural hydrological cycle. This change in the hydrological cycle structure is causing streamflow depletion and changing existing patterns of water resource use. Managing such phenomena requires a streamflow depletion impact assessment technology that can forecast depletion, and such technology in turn requires GIS-based spatial data as fundamental input; however, related research is scarce. Therefore, this study examined the use of GIS-based time series spatial data for streamflow depletion assessment. GIS data covering decades of change on a national scale were constructed for six streamflow depletion impact factors (weather, soil depth, forest density, road network, groundwater usage, and land use), and the data were used as the basic input for a continuous hydrologic model. Focusing on these impact factors, the causes of streamflow depletion were analyzed over the time series. Then, using DrySAT, a distributed continuous hydrologic model, the annual runoff for each streamflow depletion impact factor was computed and a depletion assessment was conducted. As a result, the baseline annual runoff was 977.9 mm under the given weather conditions without considering other factors. When the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development were each considered, annual runoff was 1,003.5 mm, 942.1 mm, 961.9 mm, 915.5 mm, and 1,003.7 mm, respectively. The results showed the major causes of streamflow depletion: lowered soil depth decreases the infiltration volume and surface runoff, thereby decreasing streamflow; increased forest density decreases surface runoff; the expanded road network decreases sub-surface flow; increased groundwater use from indiscriminate development decreases baseflow; and increased impervious areas increase surface runoff. In addition, the depletion grade of each standard watershed was indicated, based on the definition of streamflow depletion and the grade ranges. Considering the weather, the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, the depletion grades were 2.1, 2.2, 2.5, 2.3, 2.8, and 2.2, respectively. Among the five impact factors other than rainfall, the change in groundwater usage had the largest influence on depletion, followed by the changes in forest density, road construction, land use, and soil depth. In conclusion, it is anticipated that a national streamflow depletion assessment system to be developed in the future will provide customized depletion management and prevention plans based on assessments of future changes in the six impact factors and the projected progress of depletion.
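The scenario runoffs reported above can be compared against the weather-only baseline to see the direction and size of each factor's effect. The short sketch below simply tabulates the published values; the grading step is the paper's own and is not reproduced here.

```python
# Tabulate the reported annual runoff of each depletion scenario against the
# weather-only baseline (977.9 mm); values are taken directly from the abstract.

BASELINE_MM = 977.9

scenarios_mm = {
    "decreased soil depth":      1003.5,
    "increased forest density":   942.1,
    "road development":           961.9,
    "increased groundwater use":  915.5,
    "land-use change":           1003.7,
}

for factor, runoff in sorted(scenarios_mm.items(), key=lambda kv: kv[1]):
    delta = runoff - BASELINE_MM
    direction = "less runoff" if delta < 0 else "more runoff"
    print(f"{factor:28s} {runoff:7.1f} mm  ({delta:+6.1f} mm, {direction})")
```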

Recent Research for the Seismic Activities and Crustal Velocity Structure (국내 지진활동 및 지각구조 연구동향)

  • Kim, Sung-Kyun;Jun, Myung-Soon;Jeon, Jeong-Soo
    • Economic and Environmental Geology / v.39 no.4 s.179 / pp.369-384 / 2006
  • The Korean Peninsula, located on the southeastern part of the Eurasian Plate, belongs to an intraplate region. Compared to interplate earthquakes, intraplate earthquakes are characterized by low and infrequent seismicity and a sparse, irregular distribution of epicenters. To evaluate seismic activity accurately in an intraplate region, long-term seismic data, including historical earthquake data, should be archived. Fortunately, about 2,000 years of historical earthquake records are available for the Korean Peninsula. Analysis of these historical and instrumental earthquake data shows that seismic activity was very high in the 16th-18th centuries and is higher in the Yellow Sea area than in the East Sea area. Given the similarly high seismic activity of northeastern China in the 16th-18th centuries, it is inferred that the seismic activity of the two regions is closely related. The general trend of the epicenter distribution is SE-NW. In the Korean Peninsula, the first seismic station was installed at Incheon in 1905, and five additional stations were installed by 1943. There was no seismic station from 1945 to 1962, but a World-Wide Standardized Seismograph station was installed in Seoul in 1963. In 1990, the Korea Meteorological Administration (KMA) established a centralized, real-time modern seismic network consisting of 12 stations. Since then, many institutes have expanded their own seismic networks in the Korean Peninsula. KMA now operates 35 velocity-type seismic stations and 75 accelerometers, and the Korea Institute of Geoscience and Mineral Resources operates 32 velocity-type stations and 16 accelerometers. The Korea Institute of Nuclear Safety and the Korea Electric Power Research Institute operate 4 and 13 stations, respectively, consisting of velocity-type seismometers and accelerometers. In and around the Korean Peninsula, 27 intraplate earthquake mechanisms since 1936 were analyzed to understand the regional stress orientation and tectonics. These are the largest earthquakes of the period and may represent the characteristics of earthquakes in this region. Their focal mechanisms show predominantly strike-slip faulting with a small thrust component. The average P-axis is nearly horizontal and oriented ENE-WSW. In northeastern China, strike-slip faulting is also dominant, and the nearly horizontal ENE-WSW average P-axis is very similar to that of the Korean Peninsula. In the eastern part of the East Sea, on the other hand, thrust faulting is dominant and the average P-axis is horizontal and oriented ESE-WNW. This indicates that not only the Pacific Plate subducting in the east but also the indenting Indian Plate controls earthquake mechanisms in the far east of the Eurasian Plate. A crustal velocity model is very important for determining the hypocenters of local earthquakes, but the crustal model in and around the Korean Peninsula remains unclear because sufficient seismic data have not been accumulated. To address this problem, reflection and refraction seismic surveys and seismic wave analysis have been applied simultaneously to two long cross-sections traversing the southern Korean Peninsula since 2002. This survey should be continued.

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted on improving firms' short-term performance and enhancing their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. The discovery of promising technologies depends on how a firm evaluates the value of technologies, and many evaluation methods have been proposed. Approaches based on expert opinion have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of the results, it is usually cost- and time-inefficient and is limited to qualitative evaluation. Many studies attempt to forecast the value of technology by using patent information to overcome this limitation. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology in a uniform structure and provides information that is not divulged in any other source. Although the patent information approach has contributed to our understanding of how to predict promising technologies, it has some limitations: predictions are made from past patent information, and the interpretation of patent analyses is not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of technological promise, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, technological promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on the development and improvement of future technologies and is clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies and represents the breadth of search underlying it; fusion can be calculated per technology or per patent, so this study measures two fusion indexes, one per technology and one per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; likewise, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (the impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, ...) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of the predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the other indexes. These unexpected results may be explained, in part, by the small number of patents: since this study uses only patent data in class G06F, the number of sample patents is relatively small, leading to incomplete learning for the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers engaged in technology development planning and policy makers who want to implement technology policy by providing a quantitative prediction methodology, and it offers other researchers a deeper understanding of the complex field of technological forecasting.
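The third module's AHP step above combines the five index predictions into a single promising score. A minimal sketch of AHP weighting via the principal eigenvector of a pairwise-comparison matrix follows; the comparison values and index scores are hypothetical, not those of the study.

```python
# Minimal AHP sketch: derive index weights from a pairwise-comparison matrix and
# combine five predicted index values into one promising score.
# The comparison matrix and the predicted scores below are hypothetical examples.
import numpy as np

indexes = ["impact", "fusion/tech", "fusion/patent", "diffusion/tech", "diffusion/patent"]

# Saaty-style pairwise comparisons (A[i, j] = importance of index i relative to index j).
A = np.array([
    [1,   3,   3,   5,   5],
    [1/3, 1,   1,   3,   3],
    [1/3, 1,   1,   3,   3],
    [1/5, 1/3, 1/3, 1,   1],
    [1/5, 1/3, 1/3, 1,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()            # relative importance of each index

predicted = np.array([0.8, 0.6, 0.5, 0.4, 0.3])  # hypothetical predicted index values at time t
promising_score = float(weights @ predicted)

for name, w in zip(indexes, weights):
    print(f"{name:17s} weight {w:.3f}")
print("promising score:", round(promising_score, 3))
```

Ranking candidate technologies by this weighted score mirrors the recommendation step described in the abstract.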

Packaging Technology for the Optical Fiber Bragg Grating Multiplexed Sensors (광섬유 브래그 격자 다중화 센서 패키징 기술에 관한 연구)

  • Lee, Sang Mae
    • Journal of the Microelectronics and Packaging Society / v.24 no.4 / pp.23-29 / 2017
  • Packaged optical fiber Bragg grating sensors, networked by multiplexing the grating sensors with WDM technology, were investigated for application to structural health monitoring of the marine trestle structure used to transport ships. The optical fiber Bragg grating sensor was packaged in a cylindrical shape made of aluminum tubes. Furthermore, after the packaged optical fiber sensor was inserted into a polymeric tube, the tube was filled with epoxy so that the sensor would resist and endure seawater. The packaged sensor component was tested under 0.2 MPa of hydraulic pressure and found to be robust. The number and locations of the Bragg gratings attached to the trestle were determined from the regions of high displacement obtained by finite element simulation. The strain of the part of the trestle subjected to the maximum load was calculated to be ~1,000 με, and thus the shift in the Bragg wavelength of the sensor caused by the maximum load was found to be ~1,200 pm. According to the finite element analysis results, the Bragg wavelength spacings of the sensors were set to 3~5 nm so that the grating wavelengths would not overlap between sensors under load; thus 50 grating sensors, with each module consisting of 5 sensors, could be networked within the 150 nm optical window of the Bragg wavelength interrogator at a wavelength of 1550 nm. Shifts in the Bragg wavelengths of the 5 packaged optical fiber sensors attached to the mock trestle unit were well interrogated by the grating interrogator, which used an optical fiber loop mirror, and the maximum strain was measured to be about 235.650 με. The modeling results for the sensor packaging and networking were in good agreement with the experimental results.
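The ~1,200 pm figure above follows from the standard FBG strain-sensitivity relation Δλ_B = λ_B(1 − p_e)ε. The sketch below reproduces that arithmetic with a typical silica-fiber photo-elastic coefficient p_e ≈ 0.22 (an assumed value, not given in the abstract) and checks how many 3 nm channels fit in the 150 nm interrogator window.

```python
# Back-of-the-envelope check of the FBG numbers quoted in the abstract.
# Standard strain response: delta_lambda = lambda_B * (1 - p_e) * strain.
# p_e ~ 0.22 is a typical photo-elastic coefficient for silica fiber (assumed here).

LAMBDA_B_NM = 1550.0   # nominal Bragg wavelength, nm
P_E = 0.22             # effective photo-elastic coefficient (assumption)
STRAIN = 1000e-6       # ~1,000 microstrain at the most heavily loaded part

shift_nm = LAMBDA_B_NM * (1 - P_E) * STRAIN
print(f"Bragg wavelength shift: {shift_nm * 1000:.0f} pm")  # ~1,200 pm, as reported

# Channel budget: with 3 nm spacing per sensor, a 150 nm interrogator window
# accommodates 150 / 3 = 50 sensors (10 modules of 5 sensors each).
WINDOW_NM, SPACING_NM = 150.0, 3.0
print("sensors per window:", int(WINDOW_NM // SPACING_NM))
```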

A Study on the Application of Block Chain Technology on EVMS (EVMS 업무의 블록체인 기술 적용 방안 연구)

  • Kim, Il-Han;Kwon, Sun-Dong
    • Management & Information Systems Review / v.39 no.2 / pp.39-60 / 2020
  • Block chain technology is one of the core elements for realizing the 4th industrial revolution, and many efforts have been made by governments and companies to provide services based on it. In this study, we analyzed the benefits of block chain technology for EVMS and designed an EVMS block chain platform with increased data security and work efficiency for project management data, which are important assets for monitoring progress, foreseeing future events, and managing work after completion. We conducted case studies on the benefits of block chain technology and then carried out a survey on the security, reliability, and efficiency of the technology with 18 block chain experts and project developers. We then interviewed an EVMS operator about the compatibility between block chain technology and EVM systems. The case studies showed that block chain technology can be applied to financial, logistics, medical, and public services to simplify the insurance claim process and to improve reliability by distributing transaction data storage and applying security and encryption features. Our research on the characteristics and necessity of block chain technology in EVMS also showed that the security, reliability, and efficiency of managing and distributing EVMS data can be improved. Finally, we designed a network model, a block structure, and a consensus algorithm model, and combined them into a conceptual block chain model for the EVM system. This study makes the following contributions. First, we showed that block chain technology is suitable for application in the defense sector and proposed a conceptual model. Second, we derived the effects that can be obtained by applying block chain technology to EVMS, along with the possibility of improving the existing business process.
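As a concrete illustration of the block structure and hash linking mentioned above, the sketch below chains hypothetical EVMS progress records with SHA-256. The record fields (WBS element, BCWS/BCWP/ACWP) are invented for illustration, and the paper's network and consensus models are not reproduced.

```python
# Minimal sketch of hash-linked blocks holding EVMS progress records.
# Field names (BCWS/BCWP/ACWP) and the single-writer chain are illustrative
# assumptions; the paper's consensus algorithm model is not reproduced here.
import hashlib
import json
import time

def make_block(records, prev_hash):
    header = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "records": records,  # e.g., periodic earned-value measurements
    }
    payload = json.dumps(header, sort_keys=True).encode()
    header["hash"] = hashlib.sha256(payload).hexdigest()
    return header

genesis = make_block([], prev_hash="0" * 64)
block1 = make_block(
    [{"wbs": "1.2.3", "BCWS": 120.0, "BCWP": 110.0, "ACWP": 130.0}],
    prev_hash=genesis["hash"],
)

# Tampering with a stored record breaks the link to the next block's prev_hash,
# which is the integrity property the abstract attributes to distributed storage.
chain = [genesis, block1]
ok = all(chain[i + 1]["prev_hash"] == chain[i]["hash"] for i in range(len(chain) - 1))
print("chain consistent:", ok)
```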

Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2016 (설비공학 분야의 최근 연구 동향 : 2016년 학회지 논문에 대한 종합적 고찰)

  • Lee, Dae-Young;Kim, Sa Ryang;Kim, Hyun-Jung;Kim, Dong-Seon;Park, Jun-Seok;Ihm, Pyeong Chan
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.29 no.6 / pp.327-340 / 2017
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2016. It is intended to convey the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. The conclusions are as follows. (1) Research on thermal and fluid engineering has been reviewed in the groups of flow, heat and mass transfer, reduction of pollutant exhaust gas, cooling and heating, renewable energy systems, and flow around buildings. CFD schemes were used increasingly across all research areas. (2) Research in the heat transfer area has been reviewed in the categories of heat transfer characteristics, pool boiling and condensing heat transfer, and industrial heat exchangers. Research on heat transfer characteristics included the long-term performance variation of a plate-type enthalpy exchange element made of paper, design optimization of an extruded-type cooling structure for reducing the weight of LED street lights, and hot plate welding of thermoplastic elastomer packing. In the area of pool boiling and condensing, the heat transfer characteristics of a finned-tube heat exchanger in a PCM (phase change material) thermal energy storage system, the influence of flow boiling heat transfer on fouling in nanofluids, and PCM under simultaneous charging and discharging conditions were studied. In the area of industrial heat exchangers, a one-dimensional flow network model, a porous-media model, and R245fa in a plate-shell heat exchanger were studied. (3) Various studies were published in the categories of refrigeration cycle, alternative refrigeration/energy systems, and system control. In the refrigeration cycle category, subjects include a mobile cold storage heat exchanger, compressor reliability, an indirect refrigeration system with CO₂ as the secondary fluid, a heat pump for fuel-cell vehicles, heat recovery from a hybrid drier, and heat exchangers with two-port and flat tubes. In the alternative refrigeration/energy system category, subjects include a membrane module for dehumidification refrigeration, desiccant-assisted low-temperature drying, a regenerative evaporative cooler, and ejector-assisted multi-stage evaporation. In the system control category, subjects include multi-refrigeration system control, emergency cooling of a data center, and variable-speed compressor control. (4) In the building mechanical system research field, fifteen studies were reported on the effective design of mechanical systems and on maximizing the energy efficiency of buildings. The topics included energy performance, HVAC systems, ventilation, and renewable energies. The proposed designs and the performance tests using numerical methods and experiments provide useful information and key data that could help improve the energy efficiency of buildings. (5) The field of architectural environment was mostly focused on the indoor environment and building energy. The main research on the indoor environment was related to analyses of indoor thermal environments controlled by portable coolers, the effects of outdoor wind pressure on airflow in high-rise buildings, window air-tightness related to filling-piece shapes, the stack effect in core-type office buildings, and the development of a movable drawer-type light shelf with an adjustable reflector depth. The subjects of building energy included energy consumption analysis in office buildings, prediction of the exit air temperature of a horizontal geothermal heat exchanger, LS-SVM based modeling of the hot water supply load for a district heating system, the energy-saving effect of an ERV system using a night purge control method, and the effect of strengthened insulation levels on building heating and cooling loads.