• Title/Summary/Keyword: short term time series


Structure and Variation of Tidal Flat Temperature in Gomso Bay, West Coast of Korea (서해안 곰소만 갯벌 온도의 구조 및 변화)

  • Lee, Sang-Ho;Cho, Yang-Ki;You, Kwang-Woo;Kim, Young-Gon;Choi, Hyun-Yong
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.10 no.1
    • /
    • pp.100-112
    • /
    • 2005
  • Soil temperature was measured from the surface to 40 cm depth at three stations of different heights on the tidal flat of Gomso Bay, west coast of Korea, for one month in each season of 2004 to examine the thermal structure and its variation. Mean temperature in the surface layer was higher in summer and lower in winter than in the lower layer, reflecting the seasonal variation of the vertically propagating temperature structure driven by heating and cooling at the tidal flat surface. The standard deviation of temperature decreased from the surface toward the lower layer. Periodic variations of solar radiation and tide mainly caused the short-term variation of soil temperature, which was also intermittently influenced by precipitation and wind. Time series analysis showed power spectral energy peaks at periods of 24, 12 and 8 hours, with the strongest peak at the 24-hour period. These peaks can be interpreted as temperature waves forced by the variation of solar radiation, the diurnal tide, and the interaction of the two, respectively. EOF analysis showed that the first and second modes resolved 96% of the variation of the vertical temperature structure. The first mode was interpreted as heating and cooling from the tidal flat surface, and the second mode as the effect of the phase lag produced by temperature wave propagation in the soil. The phase of heat transfer by the 24-hour wave, analyzed by cross spectrum, showed that the mean phase difference of the temperature wave increased almost linearly with soil depth. The time lags corresponding to the phase differences from the surface to 10, 20 and 40 cm were 3.2, 6.5 and 9.8 hours, respectively. The vertical thermal diffusivity of the 24-hour temperature wave was estimated using a one-dimensional thermal diffusion model. The diffusivity averaged over soil depths and seasons was $0.70{\times}10^{-6}m^2/s$ at the middle station and $0.57{\times}10^{-6}m^2/s$ at the lowest station. 
The depth-averaged diffusivity was large in spring and small in summer, and the seasonal mean diffusivity increased vertically from 2 cm to 10 cm and decreased from 10 cm to 40 cm. Thermal propagation speeds were estimated as $8.75{\times}10^{-4}cm/s$, $3.8{\times}10^{-4}cm/s$ and $1.7{\times}10^{-4}cm/s$ from 2 cm to 10 cm, 20 cm and 40 cm, respectively, indicating that the speed decreases with increasing depth from the surface.
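The reported diffusivities can be cross-checked against the reported time lags: for a sinusoidal temperature wave of angular frequency $\omega$ diffusing into a homogeneous half-space, the phase lag at depth z is ${\phi}(z)=z\sqrt{\omega/2\kappa}$, which with time lag ${\tau}={\phi}/{\omega}$ gives ${\kappa}=z^2/(2{\omega}{\tau}^2)$. A minimal Python sketch (the half-space assumption is only approximate here, since the abstract reports depth-varying diffusivity):

```python
import math

# Angular frequency of the dominant 24-hour temperature wave
omega = 2 * math.pi / 86400.0  # rad/s

def diffusivity_from_lag(depth_m, lag_hours):
    """For a sinusoidal wave diffusing into a homogeneous half-space,
    phase lag phi = z*sqrt(omega/(2*kappa)); with phi = omega*tau this
    rearranges to kappa = z**2 / (2 * omega * tau**2)."""
    tau = lag_hours * 3600.0
    return depth_m**2 / (2 * omega * tau**2)

# Time lags reported in the abstract (surface to 10, 20 and 40 cm)
for depth_cm, lag_h in [(10, 3.2), (20, 6.5), (40, 9.8)]:
    kappa = diffusivity_from_lag(depth_cm / 100.0, lag_h)
    print(f"{depth_cm:>2} cm: kappa ~ {kappa:.2e} m^2/s")
```

With the abstract's lags, this sketch yields roughly $0.5{\sim}0.9{\times}10^{-6}m^2/s$, consistent in magnitude with the reported station averages.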

Distribution of Salinity and Temperature due to the Freshwater Discharge in the Yeongsan Estuary in the Summer of 2010 (2010년 여름 담수방류에 의한 영산강 하구의 염분 및 수온 분포 변화)

  • Park, Hyo-Bong;Kang, Kiryong;Lee, Guan-Hong;Shin, Hyun-Jung
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.17 no.3
    • /
    • pp.139-148
    • /
    • 2012
  • The short-term variation of salinity and temperature in a dyked estuarine environment is mainly controlled by the freshwater discharge from the dyke. We examined the distribution of salinity and temperature under freshwater discharge in the Yeongsan River estuary using CTD data obtained at 8 stations through three surveys in June (weak discharge) and August (intensive discharge) of 2010. During the weak discharge in June, the surface salinity was 30-32.5 psu and its horizontal gradient was relatively high around Goha-do (0.25~0.32 psu/km). On the other hand, the bottom-layer salinity was almost constant at around 33 psu. Water temperature ranged $19{\sim}21^{\circ}C$ and displayed a higher gradient in the north-south direction than in the east-west direction. During the intensive freshwater discharge on August 12, the salinity dropped to 9~26 psu. The maximum horizontal gradient of surface salinity reached 3.8 psu/km north of Goha-do, where a strong salinity front was formed, while the horizontal salinity gradient of the bottom layer was 0.28 psu/km. The horizontal gradient of water temperature was $-0.45^{\circ}C/km$ at the surface and $-0.12^{\circ}C/km$ at the bottom, with high surface temperature near the dyke decreasing gradually toward the river mouth. Three days after the intensive discharge ($3^{rd}$ survey), the surface salinity had increased to 22~26 psu, though a relatively high horizontal gradient still remained around Goha-do. Meanwhile, the bottom salinity decreased to 26.5~27.5 psu, but its gradient was not as large as the surface gradient. According to a time series of CTD profiles near the dyke, the discharged fresh water jetted down temporarily and then recovered gradually, with a recovery speed of 0.4 m/hour for a discharge of $13{\times}10^6$ ton. 
Due to the combined effects of freshwater discharge and surface heating during the summer of 2010, the Yeongsan estuary in general underwent intensified vertical stratification, which in turn inhibited vertical mixing, especially inside the estuary. Based on the spatial distribution of salinity and temperature, the Yeongsan estuary can be divided into three regions: the Goha-do area with strong horizontal gradients of salinity and temperature, the inner estuary from Goha-do to the dyke with low salinity, and the outer estuary from Goha-do to the coast with relatively high salinity.
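Horizontal gradients such as the 0.25~0.32 psu/km above are finite differences of salinity between adjacent stations along a transect. A minimal sketch with hypothetical station values (illustrative only, not the survey data):

```python
def horizontal_gradient(values, positions_km):
    """Finite-difference gradient (e.g. psu/km) between adjacent stations."""
    return [(v2 - v1) / (x2 - x1)
            for (v1, x1), (v2, x2) in zip(zip(values, positions_km),
                                          zip(values[1:], positions_km[1:]))]

# Hypothetical surface salinities along a transect through Goha-do
salinity = [30.0, 31.5, 32.5]   # psu (illustrative values)
distance = [0.0, 5.0, 10.0]     # km from the dyke
print(horizontal_gradient(salinity, distance))  # psu/km per segment
```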

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes the cause difficult to identify. Previous studies on failure prediction in data centers predicted failure by treating each server as a single state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, etc. Since such failures can be prevented in the early stages of data center construction, various solutions have been developed for them. On the other hand, the causes of failures occurring inside the server are difficult to determine, and adequate prevention has not yet been achieved. This is because server failures rarely occur singly: one server's failure may cause failures in other servers, or be triggered by them. In other words, while existing studies analyzed failure under the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers. 
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes of that time is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently appeared together within the sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used to account for the fact that the contribution to a complex failure differs across servers. This architecture improves prediction accuracy by assigning higher weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was modeled both as a single-server state and as a multi-server state, and the results were compared. The second experiment improved the prediction accuracy for the complex-server case by optimizing a threshold for each server. 
In the first experiment, which modeled the data as a single server and as multiple servers respectively, the single-server model predicted no failure for three of the five servers even though failures had actually occurred. Under the multi-server assumption, however, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. The study thus confirmed that prediction performance was superior under the multi-server assumption than under the single-server assumption. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
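The 5-minute simultaneity rule used above to define complex failures can be sketched as follows. The event log and device names are hypothetical, and the window is anchored at the first event of each group, which is one plausible reading of the definition:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def group_concurrent_failures(events):
    """Group (timestamp, device) failure events: events within 5 minutes
    of the first event of the current group are treated as one
    simultaneous (complex) failure, per the definition above."""
    events = sorted(events)                    # chronological order
    groups, current = [], []
    for ts, device in events:
        if current and ts - current[0][0] > WINDOW:
            groups.append([d for _, d in current])
            current = []
        current.append((ts, device))
    if current:
        groups.append([d for _, d in current])
    return groups

# Hypothetical failure history (device IDs are illustrative)
log = [
    (datetime(2020, 1, 1, 9, 0), "server-A"),
    (datetime(2020, 1, 1, 9, 3), "network-node-1"),
    (datetime(2020, 1, 1, 9, 4), "dbms-2"),
    (datetime(2020, 1, 1, 10, 0), "server-B"),
]
print(group_concurrent_failures(log))
# the first three events form one complex failure; server-B stands alone
```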

Study on the Physical Properties of the Gamma Beam-Irradiated Teflon-FEP and PET Film (Teflon-FEP 와 PET Film 의 감마선 조사에 따른 물리적 특성에 관한 연구)

  • 김성훈;김영진;이명자;전하정;이병용
    • Progress in Medical Physics
    • /
    • v.9 no.1
    • /
    • pp.11-21
    • /
    • 1998
  • Circular metal electrodes of chromium were vacuum-deposited on both sides of Teflon-FEP and PET films, which exhibit electret characteristics, and the physical properties of the two polymers were observed during irradiation by gamma rays from $^{60}Co$. With the onset of irradiation at an output of 25.0 cGy/min, the induced current increased rapidly for 2 seconds, reached a maximum, and subsequently decreased; a steady-state induced current was reached in about 60 seconds. The dielectric constant and conductivity of Teflon-FEP changed from 2.15 to 18.0 and from $1{\times}10^{-17}$ to $1.57{\times}10^{-13}\;{\Omega}^{-1}cm^{-1}$, respectively. For PET the dielectric constant changed from 3 to 18.3 and the conductivity from $10^{-17}$ to $1.65{\times}10^{-13}\;{\Omega}^{-1}cm^{-1}$. The radiation-induced steady-state current $I^c$, permittivity $\varepsilon$ and conductivity $\sigma$ increased with output (4.0 cGy/min, 8.5 cGy/min, 15.6 cGy/min, 19.3 cGy/min). A series of independent measurements was also performed to evaluate reproducibility, and revealed less than 1% deviation within a day and 3% deviation over the long term. Charge and current depended on the interval between measurements: the shorter the interval, the larger the difference between the initial reading and the next reading. At least 20 minutes were required for the next reading to return to the initial value, which may indicate that the polymers remained in an electret state for a while. These results can be explained by the internal polarization associated with the production of electron-hole pairs by secondary electrons, the change of conductivity, and the equilibrium due to recombination. Heating the sample made the reading increase within a short time; this may be interpreted as the release of internal polarization by heating, which increased the number of charge carriers when the sample was irradiated again. 
The linearity and reproducibility of the samples with applied voltage and absorbed dose, together with the large charge measured per unit volume compared with other chambers, demonstrate the feasibility of a radiation detector and make it possible to reduce the detector volume.


A Study on Estimating Optimal Tonnage of Coastal Cargo Vessels in Korea (우리나라 연안화물선의 적정선복량 추정에 관한 연구)

  • 이청환;이철영
    • Journal of the Korean Institute of Navigation
    • /
    • v.13 no.1
    • /
    • pp.21-53
    • /
    • 1989
  • In the past twenty years, there has been a rapid increase in the volume of traffic in Korea due to the great growth of the Korean economy. Since transportation provides an infrastructure vital to economic growth, it has become more and more an integral part of the Korean economy. The importance of coastal shipping stands out in particular, not only because of the expansion limit on the road network, but also because of saturation in the capacity of rail transportation. In spite of this increase and its importance, coastal shipping is falling behind, partly because it is given less emphasis than ocean-going shipping and other inland transportation systems, and partly because of overcompetition due to excessive ship tonnage. Therefore, estimating and planning optimum ship tonnage is the first task in developing Korean coastal shipping. This paper aims to estimate the optimum coastal ship tonnage by computer simulation and finally to draw up plans for balancing ship tonnage supply and demand. The estimation of the optimum ship tonnage is performed by Origin-Destination and time series analysis. The results are as follows: (1) The optimum ship tonnage in 1987 was 358,680 DWT, which is 54% of the current ship tonnage (481 ships, 662,664 DWT); the current tonnage is equal to the optimum ship tonnage for 1998. This overcapacity results in excessive competition and financial difficulties in Korean coastal shipping. (2) The excessive ship tonnage can be broken down by ship type as follows: oil carriers 250,926 DWT (350%), cement carriers 9,977 DWT (119%), iron material/machinery carriers 25,665 DWT (117%), general cargo carriers 17,416 DWT (112%). (3) The current total ship crew of 5,079 exceeds the verified optimally efficient figure of 3,808 by 1,271. (4) From the viewpoint of management strategy, it is necessary that excessive ship tonnage be reduced and uneconomic outdated vessels be scrapped. 
It was also found that diversion into economically efficient fleets is urgently required in order to meet the increasing annual cargo volume (23,877 DWT). (5) The plans for balancing ship tonnage supply and demand are as follows: 1) The establishment of a legislative system for the arrangement of ship tonnage. This would involve (a) the announcement of an optimum tonnage to guide the licensing of cargo vessels and ship tonnage supply, and (b) the establishment of an organization that substantially arranges tonnage in Korean coastal shipping. 2) The announcement of an optimum ship tonnage, both yearly and short-term, to guide current tonnage supply plans. 3) The settlement of elastic tariffs to protect coastal shipping's share from other transportation systems. 4) Restriction of ocean-going vessels from participating in coastal shipping routes. 5) Business rationalization of coastal shipping companies, reducing uneconomic outdated vessels and boosting the national economy. If these ends are to be achieved, the following are prerequisites: I) Because many non-licensed vessels are actually operating and threatening the safe voyage of others on Korean coastal routes, such vessels should be controlled and punished by the authorities. II) The supply of ship tonnage on Korean coastal routes should be prudently monitored, because most coastal vessels are too small to be diverted to ocean-going routes in case of excessive supply. III) Every ship type engaged in coastal shipping should be specialized according to the characteristics of its routes as soon as possible.
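The time-series step of the tonnage estimation can be illustrated with a minimal least-squares trend extrapolation; the annual cargo volumes below are hypothetical, and the study's full O-D and simulation model is not reproduced here:

```python
def linear_trend_forecast(years, volumes, target_year):
    """Ordinary least-squares linear trend, extrapolated to target_year.
    A stand-in for the time-series step of the tonnage estimation."""
    n = len(years)
    mx = sum(years) / n
    my = sum(volumes) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, volumes))
             / sum((x - mx) ** 2 for x in years))
    return my + slope * (target_year - mx)

# Hypothetical annual coastal cargo volumes (thousand tons)
years = [1983, 1984, 1985, 1986, 1987]
vols = [410, 440, 465, 500, 530]
print(round(linear_trend_forecast(years, vols, 1989)))  # -> 589
```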


Wind-and Rain-induced Variations of Water Column Structures and Dispersal Pattern of Suspended Particulate Matter (SPM) in Marian Cove, the South Shetland Islands, West Antarctica during the Austral Summer 2000 (서남극 남 쉐틀랜드 군도 마리안 소만에서 바람 및 강수에 의한 여름철 수층 구조의 변화와 부유물질 분산)

  • 유규철;윤호일;오재경;강천윤;김예동;배성호
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.8 no.4
    • /
    • pp.357-368
    • /
    • 2003
  • Time-series CTDT (Conductivity/Temperature/Depth/Transmissivity) profiles were obtained at one station near the tidewater glacier of Marian Cove (King George Island, Antarctica) to characterize water column properties and the dispersal pattern of suspended particulate matter (SPM) in relation to tide, current, meteorological data, and SPM concentration. Four layers were identified from the water column characteristics measured at one-hour intervals for about 2 days: 1) a cold, fresh, and turbid surface mixed layer between 0-20 m depth, 2) a warm, saline, and relatively clean Maxwell Bay inflow between 20-40 m depth, 3) a turbid, cold tongue of subglacial discharge, distinct from the ambient waters, between 40-70 m depth, and 4) cold, saline, and clean bottom water below 70 m depth. The surface plume, turbid fresh water found along the coastal cliff area in late summer (early February), had temperature and SPM concentrations characteristic of the morphology, glacial conditions, and sediment composition. Its restricted dispersion, confined over the input source of the meltwater discharge, was due to calm weather conditions. Under strong wind-induced surface turbulence, the fresh and turbid surface plume, the cold englacial upwelling water, the glacier-contact meltwater, and the Maxwell Bay inflow were mixed in the ice-proximal zone, and the resulting mixed layer deepened at the surface. Heavy precipitation, the major factor controlling short-term increases in glacial discharge, was accompanied by the marked development of subglacial discharge, which resulted in a rapid drop of salinity below mid depth. Although the amounts of subglacial discharge and englacial upwelling may be large, their low SPM concentrations would have little influence on the bottom deposition of terrigenous sediments.
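The four-layer division above can be expressed as a simple depth classifier; the boundaries (20, 40 and 70 m) are specific to this cove and season, and a real analysis would classify by the measured temperature, salinity and transmissivity rather than by depth alone:

```python
def classify_layer(depth_m):
    """Assign a depth bin to one of the four layers identified in the
    abstract (boundaries at 20, 40 and 70 m, specific to Marian Cove)."""
    if depth_m < 20:
        return "surface mixed layer (cold, fresh, turbid)"
    if depth_m < 40:
        return "Maxwell Bay inflow (warm, saline, clean)"
    if depth_m < 70:
        return "subglacial discharge tongue (cold, turbid)"
    return "bottom water (cold, saline, clean)"

for z in (5, 30, 55, 90):
    print(z, "->", classify_layer(z))
```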

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them. But some applications need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specification, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system. 
The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a region of interest into spatial-sequential feature vectors; and the third is a bi-directional long short-term memory network which converts the spatial-sequential information into character strings by mapping the feature vectors to characters in a time-series fashion. In this research, the character strings of interest are the device ID and gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount consists of 4~5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The system architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks which conduct the character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When a request from the master process is present in the input queue, the slave process converts the image into the device ID string, the gas usage amount string and the position information of the strings, returns the information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation and testing of the three deep neural networks. 
22,985 images were used for training and validation, and 4,135 images were used for testing. For each training epoch, we randomly split the 22,985 images in an 8:2 ratio for training and validation, respectively. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale and slant). Normal denotes clean image data, noise means images with noise signals, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capture, and slant means images which are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data were 0.960 and 0.864, respectively.
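The master-slave FIFO dispatch described above can be sketched with Python's standard queue and threading modules. This is a single-process stand-in for the AWS master/GPU-slave setup, with the recognition step replaced by a placeholder:

```python
import queue
import threading

input_q = queue.Queue()    # FIFO: master pushes reading requests
output_q = queue.Queue()   # slave returns recognized strings

def slave_worker():
    """Stand-in for a GPU slave: poll the input queue, 'recognize' the
    image, push the result, then go back to polling (idle mode)."""
    while True:
        request = input_q.get()       # blocks until a request arrives
        if request is None:           # sentinel: shut the worker down
            break
        image_id, _image = request
        # ... CNN region detection + CRNN recognition would run here ...
        output_q.put((image_id, "device-id-and-usage-placeholder"))
        input_q.task_done()

worker = threading.Thread(target=slave_worker)
worker.start()
input_q.put(("req-001", b"fake-image-bytes"))   # master enqueues a capture
print(output_q.get())                            # master collects the result
input_q.put(None)                                # shut the worker down
worker.join()
```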

A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.67-88
    • /
    • 2018
  • Until recently, as the significance of sustainable growth and competitiveness of small-and-medium sized enterprises (SMEs) has been recognized, governmental support has mainly been provided for tangible resources such as R&D, manpower, funds, etc. However, inefficiencies of the support system, such as underestimated or redundant support, have also been raised, because conflicting policies exist in terms of the appropriateness, effectiveness and efficiency of business support. From the perspective of both the government and a company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources is the basis for creating competitive advantage, and we emphasize value creation activities toward this end. This is why value chain network analysis is necessary in order to analyze inter-company deal relationships across a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exist the Technology Opportunity Discovery (TOD) system, which provides information on relevant products or the technology status of companies with patents through retrieval by patent, product, or company name, and CRETOP and KISLINE, which both allow viewing of company (financial) information and credit information; but there exists no online system that provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders with whom business deals could be made in the future. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for corporate business strategy planning developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently. 
Further, we explore the network visualization function of the intelligent value chain analysis system, which provides the core information for understanding industrial structure and for a company's new product development. For a company to gain competitive superiority over other companies, it is necessary to identify which competitors currently hold patents or produce products, and searching for similar companies or competitors by industry type is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in identifying potential customers when both parties enter similar fields. Identifying a competitor at the enterprise or industry level using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry. In particular, it is useful as a corporate-level information analysis tool for tasks such as identifying industry structure, identifying competitor trends, analyzing competitors, locating suppliers (sellers) and demanders (buyers), tracking industry trends by item, finding promising items, finding new entrants, finding core companies and items along the value chain, and recognizing patents with their corresponding companies. 
In addition, based on the objectivity and reliability of analysis results derived from transaction data and financial data, the value chain network system is expected to be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid- or short-term demand forecasting, in particular for more than 15,000 member companies in Korea and for employees in R&D service sectors, government-funded research institutes and public organizations. To strengthen the business competitiveness of companies, technology, patent and market information has so far been provided mainly by government agencies and private research-and-development service companies, framed as patent analysis (mainly rating and quantitative analysis) or market analysis (market prediction and demand forecasting based on market reports). However, this has been limited in resolving the lack of information, one of the difficulties that Korean firms often face at the commercialization stage; in particular, it is much more difficult to obtain information about competitors and potential candidates. In this study, a real-time value chain analysis and visualization service module based on the proposed network map and the data at hand provides expected market share, estimated sales volume, and contact information (implying potential suppliers of raw materials/parts and potential demanders of complete products/modules). In future research, we intend to investigate in depth the indices of competitive factors through the participation of research subjects, to newly develop competitive indices for competitors or substitute items, and additionally to apply data mining techniques and algorithms to improve the performance of VCNS.
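One simple competitor-finding criterion on such an inter-company transaction network can be sketched as follows: companies that sell to many of the same buyers are flagged as likely competitors. The data and the shared-buyer threshold are hypothetical, and this is a simplification of the system's network-map analysis, not the VCNS modules themselves:

```python
# Hypothetical seller -> buyers transaction edges (illustrative only)
sales = {
    "CompanyA": {"Buyer1", "Buyer2", "Buyer3"},
    "CompanyB": {"Buyer2", "Buyer3", "Buyer4"},
    "CompanyC": {"Buyer9"},
}

def likely_competitors(target, sales, min_shared=2):
    """Flag companies selling to >= min_shared of the target's buyers:
    one simple value-chain criterion for competitor identification."""
    buyers = sales[target]
    return [other for other, other_buyers in sales.items()
            if other != target and len(buyers & other_buyers) >= min_shared]

print(likely_competitors("CompanyA", sales))  # -> ['CompanyB']
```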