Correlation analysis of pollutants using IoT technology in LID facilities (LID 시설 내 IoT 기술을 활용한 오염물질 상관성 분석)
- Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.453-453 / 2021
Low impact development (LID) and green infrastructure techniques are being applied as key measures for managing non-point source pollution in urban areas, restoring the water cycle, increasing infiltration and evapotranspiration, and mitigating the urban heat island effect. LID facilities are small-scale, decentralized installations; because many diverse facilities are deployed over wide areas, their number is large, and commercial water-quality and soil sensors are expensive, so their application is limited by equipment cost and maintenance burden. Therefore, based on past monitoring data, correlation analysis among pollutants makes it possible to predict parameters that are difficult to measure from parameters that can be measured, and a cost-effective system capable of real-time LID monitoring was developed by building cost-effective sensors for the selected parameters. The LID facilities at the Cheonan campus of Kongju National University were constructed in 2013 and have been in operation since; the site was selected because more than five years of storm-event monitoring data are available for pollutant correlation analysis. The correlation analysis used storm-event monitoring data collected at an infiltration trench from 2013 to 2017. The average influent concentrations entering the infiltration trench were TSS 286.1±318.3 mg/L, BOD 22.6±39.5 mg/L, TN 8.96±5.85 mg/L, and TP 1.01±1.11 mg/L. Influent pollutant concentrations were higher in summer than in winter, which was attributed to heavy metals and tire-wear particles deposited on road surfaces by vehicle traffic during hot, dry summer conditions, and to sediment washed off during monsoon rainfall. For pollutant loads, TSS and COD showed a high correlation of 0.66, and COD was also highly correlated with TSS, TP, and TN. Wireless monitoring of the LID facilities was established at Infiltration Planter 2 and a bioretention cell using low-cost sensors based on Arduino and Raspberry Pi, an LTE modem, and a program connected to a database. For the bioretention cell, where power supply is difficult, a solar power system with an auxiliary battery was installed so that monitoring continues from the backup battery when prolonged bad weather, such as the monsoon season, prevents power generation. Three types of sensors were applied: soil moisture, soil temperature, and conductivity; the program was built through a second revision in two stages. Calibration and validation of errors, malfunctions, and measured values are still required, and building a database of meteorological data is considered necessary for better analysis of impacts on the soil and the LID facilities.
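As a minimal illustration of the pollutant correlation analysis described above, the sketch below computes a Pearson correlation coefficient between two pollutant series. The concentration values are hypothetical placeholders, not the study's monitoring data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical storm-event average concentrations (mg/L) for two pollutants
tss = [120.0, 450.0, 86.0, 300.0, 510.0, 95.0]
cod = [30.0, 88.0, 25.0, 60.0, 105.0, 28.0]

r = pearson_r(tss, cod)
print(f"TSS-COD Pearson r = {r:.2f}")
```

A coefficient near the study's reported 0.66 or higher would suggest that the harder-to-measure parameter can be predicted from the cheaper one.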
Mobile communications have evolved rapidly over the decades, from 2G to 5G, focusing mainly on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robotics, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to transform our living environment and industries as a whole. To deliver these services, reduced latency and high reliability are critical for real-time applications, on top of high-speed data. 5G therefore targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services based on Vehicle-to-Everything (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses the high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their near line-of-sight propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use. These constraints are difficult to overcome under existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing are needed. SDNs should therefore be partitioned at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-based networks, where vehicles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the SDN data processing time are the factors most strongly correlated with overall delay. Of these, the RTD is not a significant factor, because link speeds are sufficient and it contributes less than 1 ms; but the information change cycle and the SDN data processing time greatly affect the delay. In particular, in an emergency in a self-driving environment linked to an Intelligent Traffic System (ITS), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the 5G data rate is sufficiently high, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assume 5G small cells with radii of 50-250 m and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes delay.
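To give a feel for why the information change cycle matters at these scales, the sketch below computes how long a vehicle dwells in a small cell for the cell radii (50-250 m) and speeds (30-200 km/h) assumed in the simulation. The straight-line diameter-crossing model is a simplifying assumption for illustration, not the paper's simulator.

```python
def dwell_time_ms(cell_radius_m: float, speed_kmh: float) -> float:
    """Time a vehicle spends crossing a small cell along its diameter, in ms."""
    speed_ms = speed_kmh / 3.6            # convert km/h to m/s
    return (2 * cell_radius_m) / speed_ms * 1000.0

# Sweep the corner cases of the assumed scenario
for radius in (50, 250):
    for speed in (30, 200):
        t = dwell_time_ms(radius, speed)
        print(f"radius={radius} m, speed={speed} km/h -> dwell {t:.0f} ms")
```

Even the worst case (50 m cell at 200 km/h) gives a dwell time of about 1.8 s, so handovers themselves are not the bottleneck; the per-handover information update and SDN processing budget within that window is.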
Investors prefer to look for trading points based on the patterns shown in a chart rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is difficult and less computerized than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, from the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they locate points that match it and then measure performance after n days, assuming a purchase at that point in time. Since this approach computes virtual returns, it can diverge considerably from reality. Whereas existing research tries to find patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, no performance results from the actual market have been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation is realistic because performance is measured assuming that both the buy and the sell were actually executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low-line zig-zag method, a high that meets the n-day high line is taken as a peak, and a low that meets the n-day low line is taken as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Genetic algorithms (GA) were the most suitable solution, since it was practically impossible to find high-success-rate patterns exhaustively when the number of cases was so large in this simulation. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the in-sample and out-of-sample sections separately, enabling us to respond appropriately to market changes. In this study we optimize at the portfolio level, because optimizing the variables for each individual stock risks over-optimization.
Therefore, we set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that more volatility is not always better.
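The swing wave turning-point rule described above can be sketched as follows. The price series and the window size n are hypothetical, and this is an illustrative reading of the rule rather than the study's implementation.

```python
def swing_points(highs, lows, n):
    """Swing-wave rule: a peak is a high greater than the n highs on each
    side; a valley is a low smaller than the n lows on each side.
    Returns (peak_indices, valley_indices)."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        side_highs = highs[i - n:i] + highs[i + 1:i + n + 1]
        side_lows = lows[i - n:i] + lows[i + 1:i + n + 1]
        if highs[i] > max(side_highs):
            peaks.append(i)
        if lows[i] < min(side_lows):
            valleys.append(i)
    return peaks, valleys

# Hypothetical daily highs and lows
highs = [10, 12, 15, 13, 11, 14, 18, 16, 12, 11]
lows  = [ 9, 10, 13, 11,  9, 12, 16, 14, 10,  9]
print(swing_points(highs, lows, n=2))   # -> ([2, 6], [4])
```

Because a point can only be confirmed n bars after it occurs, this rule naturally enforces the "trade after the pattern is complete" behavior that tested best.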
The purpose of this study is to provide baseline data for establishing safe usage practices for dental training X-ray generators by surveying the actual state of radiation safety management among radiation workers at institutions that operate such generators. To this end, comprehensive references were reviewed, including reports on regulatory technique development, domestic radiation safety evaluations, and nuclear-related legislation concerning the radiation safety management of dental training X-ray generators. On this basis, questionnaires were administered covering respondents' general characteristics, the status of radiation safety managers, the current state of radiation safety management, and levels of knowledge and awareness. The survey targeted 224 radiation safety managers, university-graduate training assistants, and full-time professors who operate dental training X-ray generators in educational institutions; 95 questionnaires were used for analysis after excluding insufficient or incomplete responses. For analysis, frequencies and percentages were computed for general characteristics and safety manager status, and chi-square tests for frequency, per-question correlation analysis, and Pearson correlation analysis for cross-level correlation were applied to the current state of radiation safety management and the knowledge and awareness levels. The results showed that dental training X-ray generators were operated mainly by people in their 20s to 40s with postgraduate-level education who majored in dental hygiene. In addition, women showed a higher level of awareness of radiation safety management than men, a statistically significant linear relation.
Soybean is one of the most important crops; its grains have a high protein content, and it has been consumed in various forms of food. Soybean plants are generally cultivated in fields, and their yield and quality are strongly affected by climate change. Recently, abnormal climate conditions, including heat waves and heavy rainfall, have occurred frequently, increasing the risk in farm management. Real-time assessment techniques for the quality and growth of soybean would reduce crop losses in both quantity and quality. The objective of this work was to develop a simple model to estimate soybean growth using a multispectral sensor mounted on a rotor-wing unmanned aerial vehicle (UAV). The soybean growth model was developed using simple linear regression with three phenotypic variables (fresh weight, dry weight, and leaf area index) and two types of vegetation indices (VIs). The accuracy and precision of the LAI model using GNDVI (R²=0.789, RMSE=0.73 ㎡/㎡, RE=34.91%) were greater than those of the model using NDVI (R²=0.587, RMSE=1.01 ㎡/㎡, RE=48.98%). Models based on simple ratio indices performed better than those based on normalized vegetation indices, e.g. RRVI (R²=0.760, RMSE=0.78 ㎡/㎡, RE=37.26%) and GRVI (R²=0.828, RMSE=0.66 ㎡/㎡, RE=31.59%). The outcome of this study could aid the production of soybeans of high and uniform quality when a variable-rate fertilization system is introduced to cope with adverse climate conditions.
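As an illustration of the VI-based simple linear regression approach, the sketch below computes GNDVI from band reflectances and applies a linear LAI model. The reflectance values and the slope/intercept are placeholders for illustration, not the coefficients fitted in this study.

```python
def gndvi(nir: float, green: float) -> float:
    """Green normalized difference vegetation index: (NIR - G) / (NIR + G)."""
    return (nir - green) / (nir + green)

def lai_estimate(vi: float, slope: float, intercept: float) -> float:
    """Simple linear LAI model: LAI = slope * VI + intercept.
    slope/intercept are hypothetical, not the paper's fitted values."""
    return slope * vi + intercept

# Hypothetical band reflectances from one multispectral image pixel
nir, green = 0.45, 0.08
vi = gndvi(nir, green)
print(f"GNDVI = {vi:.3f}, LAI estimate = {lai_estimate(vi, 6.0, -1.5):.2f}")
```

In practice the slope and intercept would come from regressing ground-truth LAI measurements against per-plot mean VI values extracted from the UAV mosaics.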
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, may be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
In this paper, I consider the development of methods in contemporary human geography in terms of the dialectical relation of action and structure, and try to draw a new horizon of method toward which geographical research and spatial theory could develop. Positivist geography, which was dominant during the 1960s, faced both serious internal reflection and strong external criticism in the 1970s. The internal reflections, which pointed out its neglect of the spatial behavior of decision-makers and its simplification of complex spatial relations, developed into behavioral geography and the systems-theoretical approach. Yet these alternatives still stood on positivist geography, even though they seemed more realistic and complex than their predecessor. The external criticisms, which charged the positivist method with phenomenalism and instrumentalism, suggested some alternatives: humanistic geography, which emphasizes the intention and action of the human subject and the understanding of meaning, and structuralist geography, which stresses social structure as a totality that produces spatial phenomena, and a theoretical formulation of it. Human geography today can be characterized by the strain and conflict between these methods, and hence requires a synthetic integration of them. Philosophy and social theory in general are in the same situation, in which theories of action and structural analysis have been complementary to or in conflict with each other. Human geography has fallen into a further problematic with the introduction of a method based on so-called political economy. This method has been suggested not merely as an alternative to positivist geography, but also as a theoretical foundation for the critical analysis of space.
The political economy of space, which has analyzed capitalist space and tried to theorize its transformation, may be seen either as following humanistic (or Hegelian) Marxism, as represented in Lefebvre's work, or as following structuralist Marxism, as developed in Castells's or Harvey's work. Spatial theory following humanistic Marxism has argued for a dialectical relation between 'the spatial' and 'the social', and has given more attention to practicing human agents than to explaining social structures. On the contrary, the theory based on structuralist Marxism has argued for social structures producing spatial phenomena and has focused on theorizing the totality of structures. Even though these two perspectives have more recently tended to converge, as structuralist-Marxist geographers relate the domain of economic and political structures to that of action in their studies of urban culture and experience under capitalism, the political economy of space needs an integrated method with which one can overcome the difficulties of orthodox Marxism. Since the end of the 1970s, some novel works in philosophy and social theory have been developed that are oriented toward an integrated method relating a series of concepts of action and structure and reconstructing historical materialism. They include Giddens's theory of structuration, Foucault's genealogical analysis of power-knowledge, and Habermas's theory of communicative action. There are, of course, some fundamental differences between these works. Giddens develops a theory that explicitly relates the domain of action and that of structure in terms of what he calls the 'duality of structure', and wants to bring time-space relations into the core of social theory.
Foucault writes a history in which strategically intentional but nonsubjective power relations have emerged and operated through multiple forms of constraint within specific spaces, while refusing to elaborate any theory that would underlie a political rationalization. Habermas analyzes how the Western rationalization of economic and political systems has colonized the lifeworld in which we communicate with each other, and wants to formulate a new normative foundation for a critical theory of society that highlights communicative reason (without any consideration of spatial concepts). On the basis of the above considerations, this paper draws a new horizon of method in human geography and spatial theory, some essential ideas of which can be summarized as follows: (1) the concept of space, especially in terms of its relation to society. Space is not an ontological entity independent of society with its own laws of constitution and transformation; it can be produced and reproduced only by virtue of its relation to society. Yet space is not merely a material product of society, but also a place and medium in and through which society can be maintained or transformed. (2) the constitution of space in terms of the relation between action and structure. Spatial actors, who are always knowledgeable under conditions of socio-spatial structure, produce and reproduce their context of action, that is, structure; and spatial structures, as results of human action, enable as well as constrain it. Spatial actions can be divided into instrumental-strategic action oriented to success and communicative action oriented to understanding, which (re)produce two different spheres of spatial structure in different ways: the material structure of economic and political system-space in an unacknowledged and unintended way, and the symbolic structure of social and cultural life-space in an acknowledged and intended way.
(3) capitalist space in terms of its rationalization. The ideal development of space would balance the rationalizations of system-space and life-space, such that system-space provides the material conditions for the maintenance of life-space, and life-space provides for its further development. But the development of capitalist space in reality is paradoxical and hence crisis-ridden. The economic and political system-space, propelled by steering media like money and power, has outstripped the significance of communicative action and colonized life-space. That is, we no longer live in a space mediated by communicative action, but in one created for and by money and power. Yet no matter how seriously our everyday life-space has been monetarized and bureaucratized, here nevertheless lies the practical potential to rehabilitate the meaning of space, the meaning of our life on the Earth.
In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization. This problem has raised the interest of many researchers seeking to manage this huge amount of information, and it requires professionals capable of classifying relevant information; hence text classification was introduced. Text classification is a challenging task in modern data analysis: it assigns a text document to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most attempts have been based on proposing a new algorithm or modifying an existing one, and this line of research can be said to have reached certain limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by classifiers built from these data.
In this study, we consider that data from different domains, i.e. heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is trained under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary contained in the documents; if the viewpoints of the training data and the target data differ, the features may appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various kinds of sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms, since they are not designed to recognize different types of data representation at the same time and combine them into the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making.
In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
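A minimal sketch of the noise-injection idea, assuming a simple token-replacement scheme: a fraction of the tokens in a training document is replaced with vocabulary drawn from another domain (e.g. Twitter text injected into news documents). The token lists and replacement rate are hypothetical, and RSESLA's actual multi-view construction is more elaborate than this.

```python
import random

def inject_noise(tokens, foreign_vocab, rate=0.1, seed=42):
    """Replace a fraction `rate` of tokens with words drawn from another
    domain's vocabulary, simulating heterogeneous 'noisy' training text."""
    rng = random.Random(seed)           # fixed seed keeps the run reproducible
    noisy = list(tokens)
    for i in range(len(noisy)):
        if rng.random() < rate:
            noisy[i] = rng.choice(foreign_vocab)
    return noisy

doc = "the market reacted to the quarterly earnings report".split()
twitter_vocab = ["lol", "breaking", "rt", "trending"]   # hypothetical
print(inject_noise(doc, twitter_vocab, rate=0.3))
```

Training one view of the ensemble on such perturbed documents, alongside views trained on clean data, is one way to make the final classifier less sensitive to vocabulary shift between training and target domains.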
Purpose: The reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to respiratory-phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a period of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used to track motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT, corresponding to 10% phase intervals, were acquired. From the positions, sizes, and uptake values of each subject in the resulting phase-specific PET and CT datasets, the correlations between motion artifacts in PET and CT images and the size of the motion relative to the size of the subject were analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed well with the actual positions within the estimated error. However, subject volumes in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the subject; correspondingly, the maximal uptake value was reduced to about 50%. Conclusion: RGPET is demonstrated to remove respiratory motion artifacts in PET imaging; moreover, more precise image fusion and more accurate attenuation correction are possible by combining it with 4D-CT.
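The 10% phase-interval binning used for gating can be sketched as follows. The timestamps are hypothetical, and real RPM-based gating derives the phase from the measured respiratory trace rather than from a fixed period; the fixed 6 s period here mirrors the phantom's programmed motion.

```python
def phase_bin(event_time: float, period: float, n_bins: int = 10) -> int:
    """Assign an event to one of n_bins equal phase intervals of the
    respiratory cycle (phase 0 at the start of each period)."""
    phase = (event_time % period) / period      # 0.0 <= phase < 1.0
    return int(phase * n_bins)

# Hypothetical event timestamps (s) with the phantom's 6 s period
period = 6.0
for t in (0.3, 2.9, 5.9, 6.1):
    print(f"t={t:.1f} s -> phase bin {phase_bin(t, period)}")
```

Reconstructing each of the ten bins separately, and pairing each bin with the matching 4D-CT phase for attenuation correction, is the essence of the phase-specific approach described above.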