Accuracy Analysis of ADCP Stationary Discharge Measurement for Unmeasured Regions (ADCP 정지법 측정 시 미계측 영역의 유량 산정 정확도 분석)
Journal of Korea Water Resources Association, v.48 no.7, pp.553-566, 2015
Acoustic Doppler Current Profilers (ADCPs) can capture three-dimensional velocity vectors and bathymetry concurrently in a highly efficient and rapid manner, enabling them to document hydrodynamic and morphologic data at higher spatial and temporal resolution than other contemporary instruments. However, ADCPs inevitably leave unmeasured regions near the bottom, the surface, and the edges of a given cross-section. Velocities in those unmeasured regions are usually extrapolated or assumed when calculating flow discharge, which directly affects the accuracy of the discharge assessment. This study scrutinized a conventional extrapolation method (the 1/6 power law) for estimating the unmeasured regions in order to quantify the accuracy of ADCP discharge measurements. For the comparative analysis, we collected spatially dense velocity data using an ADV as well as a stationary ADCP in a real-scale straight river channel, and tested the applicability of the 1/6 power law alongside the logarithmic law, another representative velocity law. As a result, the logarithmic law fitted the actual velocity measurements better than the 1/6 power law. In particular, the 1/6 power law tended to underestimate the velocity in the near-surface region and overestimate it in the near-bottom region. This finding indicates that the 1/6 power law may fail to follow the actual flow regime, so the resulting discharge estimates in both the unmeasured top and bottom regions can introduce discharge bias. Therefore, the logarithmic law should be considered as an alternative, especially for stationary ADCP discharge measurement. In addition, it was found that the ADCP should be operated in water at least 0.6 m deep at the left and right edges to better estimate edge discharges.
In the future, a similar comparative analysis may be required for the moving-boat ADCP discharge measurement method, which is more widely used in the field.
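The two velocity-profile laws compared in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's implementation; the profile forms are the standard 1/6 power law and logarithmic law, and all parameter values are hypothetical.

```python
import numpy as np

def power_law_velocity(z, h, u_mean, m=6.0):
    """1/m power law: u(z) = u_mean * (m + 1) / m * (z / h)**(1 / m),
    with z the height above the bed and h the total depth. The (m+1)/m
    factor makes the depth-averaged value come out to u_mean exactly."""
    return u_mean * (m + 1.0) / m * (z / h) ** (1.0 / m)

def log_law_velocity(z, u_star, z0, kappa=0.41):
    """Logarithmic law: u(z) = (u* / kappa) * ln(z / z0),
    with shear velocity u* and roughness height z0 (both hypothetical here)."""
    return (u_star / kappa) * np.log(z / z0)

# Extrapolate into the unmeasured near-bed and near-surface regions
h = 2.0          # total depth (m), hypothetical
u_mean = 1.0     # depth-averaged velocity (m/s), hypothetical
z = np.array([0.05, 0.5 * h, 0.95 * h])   # near bed, mid-depth, near surface
u_power = power_law_velocity(z, h, u_mean)
```

The built-in depth-average normalization is what makes the power law convenient for discharge extrapolation; the study's point is that, despite this, it can still misrepresent the actual near-surface and near-bed velocities, for which the logarithmic law fitted better.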
This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called the Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to its size and complexity, such software is commonly very hard to understand during the reengineering process. This research, however, facilitates scalable re/reverse-engineering of real-time software based on its architecture, seen from three perspectives: structural, functional, and behavioral views. First, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is a software unit (SWU), generated by user-defined criteria. The architecture facilitates top-down or bottom-up navigation of the software; it captures the specification and algorithm views at different levels of abstraction, and it shows functional and behavioral information at these levels as well. Second, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view captures a different kind of functionality of the software. Third, the behavioral view includes state diagrams, interleaved event lists, etc., showing the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability to abstract and expand this dimensional information within the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software.
This approach also allows engineers to extract reusable components from the software during the reengineering process.
Cyber-based technologies are now ubiquitous around the globe, are emerging as an "instrument of power" in societies, and are becoming more available to a country's opponents, who may use them to attack, degrade, and disrupt communications and the flow of information. The globe-spanning range of cyberspace, with no national borders, will challenge legal systems and complicate a nation's ability to deter threats and respond to contingencies. Through cyberspace, competitive powers will target industry, academia, and government, as well as the military in the air, land, maritime, and space domains of our nations. Enemies in cyberspace will include both states and non-states and will range from the unsophisticated amateur to highly trained professional hackers. In much the same way that airpower transformed the battlefield of World War II, cyberspace has fractured the physical barriers that shield a nation from attacks on its commerce and communication. Cyberthreats to infrastructure and other assets are a growing concern to policymakers. In 2013, cyberwarfare was, for the first time, considered a larger threat than Al Qaeda or terrorism by many U.S. intelligence officials. The new United States military strategy makes explicit that a cyberattack is casus belli just as a traditional act of war is. The Economist describes cyberspace as "the fifth domain of warfare"; cyber capabilities are attributed to China, Russia, Israel, and North Korea, while Iran boasts of having the world's second-largest cyber-army. Entities posing a significant threat to the cybersecurity of critical infrastructure assets include cyberterrorists, cyberspies, cyberthieves, cyberwarriors, and cyberhacktivists. These malefactors may access cyber-based technologies in order to deny service, steal or manipulate data, or use a device to launch an attack against itself or another piece of equipment.
However, because the Internet offers near-total anonymity, it is difficult to discern the identity, the motives, and the location of an intruder. The scope and enormity of the threats extend not just to private industry but also to the country's heavily networked critical infrastructure. There are many ongoing efforts in government and industry that focus on making computers, the Internet, and related technologies more secure. As part of the national intelligence institution's effort, cyber counterintelligence comprises measures to identify, penetrate, or neutralize foreign operations that use cyber means as the primary tradecraft methodology, as well as foreign intelligence service collection efforts that use traditional methods to gauge cyber capabilities and intentions. However, one of the hardest issues in cyber counterintelligence is the problem of attribution. Unlike in conventional warfare, figuring out who is behind an attack can be very difficult, even though Defense Secretary Leon Panetta has claimed that the United States has the capability to trace attacks back to their sources and hold the attackers "accountable." Considering all these cybersecurity problems, this paper closely examines cybersecurity issues through lessons from the U.S. experience. For that purpose, I review the arising cybersecurity issues in light of the changing global security environment of the 21st century and their implications for reshaping the government system, with emphasis on cybersecurity as one of the growing national security threats. This article also reviews what our intelligence and security agencies should do amid the transforming cyberspace. In any case, despite all the heated debates about the various legality and human rights issues arising from cyberspace and intelligence service activity, national security must be secured.
Therefore, this paper suggests that one of the most important and immediate steps is to understand the legal ideology of national security and national intelligence.
Bioslurping combines the three remedial approaches of bioventing, vacuum-enhanced free-product recovery, and soil vapor extraction. Bioslurping is less effective in tight (low-permeability) soils, and the greatest limitation to air permeability is excessive soil moisture. Optimum soil moisture is very soil-specific: too much moisture can reduce the air permeability of the soil and decrease its oxygen transfer capability, while too little moisture will inhibit microbial activity. Therefore, the modified Fenton reaction, a chemical treatment that can overcome these weaknesses of bioslurping, was tested for simultaneous treatment. Although the diesel removal efficiency of the SVE process increased in proportion to the applied vacuum pressure, the SVE process had difficulty quickly remediating semi- or non-volatile compounds strongly adsorbed to soil. The SVE process also showed efficiency variation with distance from the extraction well and with depth, since the air flow forms a hemisphere centered on the well. Hydrogen peroxide below 0.1% showed potential as an oxygen source, but combined chemical and biological oxidation was impossible because of the low efficiency of the modified Fenton reaction at 0.1% (wt) hydrogen peroxide. NTA was a more efficient chelating agent than EDTA, and the diesel removal efficiency of the modified Fenton reaction increased in proportion to the hydrogen peroxide concentration. Hexadecane, a typical aliphatic compound, was removed less than toluene, an aromatic compound, because of its structural stability in the modified Fenton reaction. The finding that a minimum 10% hydrogen peroxide concentration yields good remediation efficiency for diesel-contaminated groundwater may show the potential of using the modified Fenton reaction after bioslurping treatment.
The objective of this paper is to reveal that the formation of Deleuze's system is a result of a back flow of the 'ideal of pure reason' in Kant's system. I try to seize upon the keyword in his main book, Difference and Repetition, and examine the aspect of mutual transformation between Deleuze's transcendental empiricism and Kant's transcendentalism. When analyzing Deleuze's system, most researchers tend to focus on anti-Hegelianism, but it is proper that Kant be adopted as the starting point when tracing the way of deployment directly. Fundamentally, Deleuze differs from Hegel in his approach to observing the entire ground of thought. Even if Deleuze surely has the capability of becoming in the dialectical context, his systemic environment in which dialectics is applied differs even at the onset. While Hegel follows the way of origin and copy, a system that begins from a preceding point of origin, Deleuze follows a way of copy and recopy, a system that begins without a point of origin. This characteristic of Deleuze's system originates directly from idealistic play. In fact, we can anticipate and identify in his book that he refers to Kant as having accepted the tradition of empiricism. Therefore, the main content of this paper is an overview of Kant's influence on Deleuze's system. In tracing ideas back to Kant's system, and to the cohabitation of empiricism and rationalism that Kant felicitously revoiced, there emerges a definitude of world recognition. This occurs through cohabitation, which is both deconstructed and integrated by Deleuze, and therein definitude is turned into a vision of prosperity. Regarding the vision of prosperity that spans definitude to recognition, a philosopher has the right to select a philosophical system, because selection methodology in philosophy is not a problem of legitimacy so much as of the needs of the times. Deleuze's choice resulted in the opening of Pandora's box in an abyss, and its secret contents have in turn risen sharply.
In the flood season, the measurement of river discharge faces many restrictions for reasons such as budget, manpower, safety, and convenience of measurement. In particular, when heavy rain events occur due to typhoons and the like, these problems make it difficult to measure flood discharge. To improve this situation, this study developed a method that can measure river discharge in the flood season simply and safely, in a short time and with minimal manpower, by combining a drone with a surface-velocity Doppler radar. To overcome the mechanical limitations of conventional drones under weather conditions such as wind and rainfall during discharge measurement, we developed a drone with IP56-grade dustproof and waterproof performance, stable flight capability at wind speeds of up to 36 km/h, and a payload of up to 10 kg. Further, to eliminate vibration, the most important constraint in measurement with a surface-velocity Doppler radar, a damper plate was developed as the device that joins the drone and the radar. The resulting instrument combining the flight platform with the velocity meter was named the DSVM (Drone and Surface Velocity Meter using Doppler radar). An error of ±3.5% was observed when measuring river discharge using the DSVM at the Geumsan-gun (Hwangpunggyo) station on Bonghwang stream, the first tributary of the Geum River. In addition, to improve accuracy when calculating the mean velocity from the measured surface velocity, simultaneous measurements were performed with an ADCP, and a mean velocity conversion factor of 0.92 was calculated by comparing the mean velocities. In this study, the discharge measured by combining a drone and a surface velocity meter was compared with the discharge measured using an ADCP and floats, confirming the applicability and utility of the DSVM.
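The conversion from radar-measured surface velocity to discharge described in this abstract can be sketched as follows. This is an illustrative sketch, not the study's code; the cross-section numbers are hypothetical, while the default factor 0.92 is the value reported above.

```python
def mean_velocity_factor(v_mean_adcp, v_surface_radar):
    # Factor k relating the ADCP depth-averaged velocity to the
    # radar-measured surface velocity: k = V_mean / V_surface
    return v_mean_adcp / v_surface_radar

def discharge(area_m2, v_surface_ms, k=0.92):
    # Velocity-area method with a surface-to-mean conversion factor:
    # Q = k * V_surface * A (0.92 is the factor reported in the abstract)
    return k * v_surface_ms * area_m2

# Hypothetical cross-section: 50 m^2 area, 2.0 m/s radar surface velocity
q = discharge(50.0, 2.0)   # about 92 m^3/s
```

The design choice here is the standard one for surface-velocity instruments: since a radar only sees the water surface, a site-calibrated factor (obtained by comparison against a depth-profiling instrument such as an ADCP) converts the surface velocity to a depth-averaged value before applying the velocity-area method.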
The export of domestic public services to overseas markets faces many potential obstacles, stemming from differences in export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using Protégé.
To this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed increases to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To provide those services, reduced latency and high reliability, on top of high-speed data, are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, reducing delay and ensuring reliability for real-time services are very important in addition to high data speeds. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry data at high speed thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay.
Since SDNs with conventional centralized structures find it difficult to meet the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be separated at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even in the worst case. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link speed is sufficient and its delay is less than 1 ms, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that neighbor-vehicle support information reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50-250 m and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes the delay.
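The cell-dwell timescale that drives the information change cycle in this setup can be estimated with a small sketch. This is a hypothetical helper, not the paper's simulator, using only the stated ranges of cell radius (50-250 m) and vehicle speed (30-200 km/h).

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Time a vehicle spends crossing a cell diameter at constant speed;
    a rough bound on how often the serving small cell changes."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return 2.0 * cell_radius_m / speed_ms

# Worst case in the assumed ranges: smallest cell, fastest vehicle
t_min = dwell_time_s(50, 200)    # 1.8 s in the cell
# Most relaxed case: largest cell, slowest vehicle
t_max = dwell_time_s(250, 30)    # 60 s in the cell
```

These dwell times bound how frequently cell-level information changes; together with the SDN's data processing time, that cycle is what the abstract identifies as the dominant contributor to delay, rather than the sub-millisecond RTD.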
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70