Simulation Analysis of GPS Reception Environment of Unified Control Points Using GIS (GIS를 이용한 통합기준점의 GPS 수신환경 모의 분석)
Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.35 no.6, pp.609-616, 2017
In 2015, the National Geographic Information Institute established a plan to install UCPs (Unified Control Points) at 2~3 km intervals in urban areas, using satellite images and aerial photographs and considering the distances between existing UCPs. In this study, we discuss a method of selecting optimal UCP locations by simulating the GPS reception environment at candidate sites using GIS. To this end, we selected new candidate sites for UCP installation from satellite images and aerial photographs, and analyzed the GPS reception environment by calculating the visibility distance from buildings around each UCP using GIS skyline analysis. The skyline analysis yields the number and arrangement of GPS satellites visible from each site's sky view. Quality evaluation results of GPS observation data obtained with TEQC were compared with the average PDOP calculated from hourly PDOP values at two points at Sungkyunkwan University over 8 hours. The results show that as PDOP increases, the data acquisition rate decreases while multipath error and cycle slips increase. This study thus verified that the quality of GPS observation data can be secured by constructing three-dimensional spatial information and simulating PDOP with GIS when siting multiple UCPs.
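The two computations described above, the skyline visibility check and the session-average PDOP, can be sketched as below. All values (the skyline elevations, satellite positions, and hourly PDOPs) are hypothetical stand-ins, not the paper's data.

```python
# Hypothetical skyline mask: maximum obstruction elevation (deg) per
# 45-deg azimuth sector, as might come from a GIS skyline analysis.
skyline_mask = [10, 35, 35, 15, 10, 10, 25, 20]  # sectors 0-45, 45-90, ...

def is_visible(az_deg, el_deg, mask):
    """A satellite is visible if its elevation clears the skyline
    obstruction in its azimuth sector."""
    sector = int(az_deg % 360 // 45)
    return el_deg > mask[sector]

# Hypothetical satellite positions as (azimuth, elevation) in degrees.
satellites = [(30, 42), (80, 20), (150, 55), (200, 12), (310, 30)]
visible = [s for s in satellites if is_visible(*s, skyline_mask)]

# Session-average PDOP over 8 hours from hourly PDOP values.
hourly_pdop = [1.8, 2.1, 2.4, 3.0, 2.7, 2.2, 1.9, 2.0]
avg_pdop = sum(hourly_pdop) / len(hourly_pdop)
```

A lower average PDOP at a candidate site indicates a better satellite geometry and, per the study's findings, a higher expected data acquisition rate.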
With the recent development of the smart grid industry, the need for an efficient EMS (Energy Management System) has increased. In particular, reducing electric load and energy cost requires sophisticated electric load forecasting and an efficient smart grid operation strategy. In this paper, for more accurate electric load forecasting, we extend the data collected at demand time to a high time resolution and construct an artificial neural network-based forecasting model suited to the high-resolution data. Furthermore, to improve forecasting accuracy, we transform sequential time series data into continuous data in a two-dimensional space, addressing the problem that machine learning methods cannot reflect the periodicity of time series data. In addition, to match external factors such as temperature and humidity to the time resolution, we estimate their values at that resolution using linear interpolation. We then apply the PCA (Principal Component Analysis) algorithm to the feature vector of external factors to remove data that have little correlation with the power data. Finally, we evaluate our model through 5-fold cross-validation. The results show that forecasting at higher time resolutions improves accuracy, with the best error rate of 3.71% achieved at the 3-min resolution.
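The interpolation step can be sketched with NumPy's `np.interp`; the hourly temperatures and the 3-minute grid below are hypothetical.

```python
import numpy as np

# Hypothetical hourly temperature readings (deg C) over 4 hours.
hours = np.array([0.0, 1.0, 2.0, 3.0])
temps = np.array([18.0, 20.0, 23.0, 21.0])

# Resample to 3-minute resolution (0.05 h) by linear interpolation,
# matching the external factor to the load data's time resolution.
t_3min = np.arange(0.0, 3.0 + 1e-9, 0.05)
temps_3min = np.interp(t_3min, hours, temps)
```

The same resampling would be applied to each external factor before assembling the feature vector passed to PCA.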
In this paper, the design of a new three-dimensional (3-D) 16-ary signal constellation with a constant envelope is presented and analyzed. Unlike conventional 16-ary constellations, all signal points of the new constellation are uniformly located on the surface of a sphere, so they have a single amplitude level and a symmetrical structure. When the average power of the constellations is normalized, the presented 16-ary constellation has an approximately 11.4% larger minimum Euclidean distance (MED) than conventional constellations with a non-constant envelope. As a result, a digital communication system that employs the presented constellation achieves a 1.2 dB improvement in symbol error rate (SER). While the signal points of the conventional constant-envelope constellation are not distributed uniformly on the surface of a sphere, those of the proposed constellation have a completely symmetric distribution. In addition, the new signal constellation has much lower computational complexity for practical implementation than the conventional one. Hence, the proposed 3-D 16-ary signal constellation is well suited to communication systems that strongly require a constant-envelope characteristic.
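Since the paper's actual point placement is not reproduced here, the following sketch builds an illustrative (not the paper's) constant-envelope 16-point set, two rings of eight points at ±45° latitude on a unit sphere, and computes its MED by brute force over all pairs.

```python
import math
import itertools

# Illustrative 16-point constant-envelope 3-D constellation: two rings of
# eight points at latitudes +/-45 deg on a unit sphere, with the lower
# ring rotated half a step. Every point has unit amplitude, so average
# power equals one without further normalization.
points = []
for lat in (math.pi / 4, -math.pi / 4):
    for k in range(8):
        lon = 2 * math.pi * k / 8 + (0.0 if lat > 0 else math.pi / 8)
        points.append((math.cos(lat) * math.cos(lon),
                       math.cos(lat) * math.sin(lon),
                       math.sin(lat)))

# Minimum Euclidean distance over all point pairs.
med = min(math.dist(p, q) for p, q in itertools.combinations(points, 2))
```

For this layout the MED is the within-ring chord, 2·cos(45°)·sin(22.5°) ≈ 0.541; an optimized design such as the paper's would place the points to enlarge this minimum.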
In this paper, we present an inertial sensor-based gait analysis system to measure and analyze lower-limb movements. We developed an integrated AHRS (Attitude Heading Reference System) that combines rate gyroscope, accelerometer, and magnetometer signals. Several AHRS modules mounted on segments of the patient's body provide quaternions representing each segment's orientation in space. We also propose a method for calculating the three-dimensional inter-segment joint angle, an important biomechanical measure for a variety of rehabilitation applications. To evaluate the performance of our AHRS module, the Vicon motion capture system, which offers millimeter resolution of 3D spatial displacements and orientations, was used as a reference. The evaluation yielded RMSEs (Root Mean Square Errors) of 1.08 and 1.72 degrees in the yaw and pitch angles, respectively. To evaluate the performance of our gait analysis system, we compared the joint angles of the hip, knee, and ankle with those provided by the Vicon system. The results show that our system can provide in-depth insight into the effectiveness, appropriate level of care, and feedback of the rehabilitation process by performing real-time limb and gait analysis during post-stroke recovery.
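The inter-segment angle computation can be sketched from the quaternions alone: the relative rotation between two segments is q_p⁻¹ · q_d, and its rotation angle serves as a simple scalar joint-angle measure (the paper's full 3-D decomposition is not reproduced). The segment names and test rotation below are hypothetical.

```python
import math

def q_conj(q):
    """Conjugate (inverse for unit quaternions) of (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def joint_angle(q_proximal, q_distal):
    """Rotation angle (deg) of the relative quaternion q_p^-1 * q_d."""
    w = q_mul(q_conj(q_proximal), q_distal)[0]
    return math.degrees(2 * math.acos(max(-1.0, min(1.0, abs(w)))))

# Hypothetical example: thigh AHRS at identity, shank AHRS rotated
# 60 deg about the x-axis (knee flexion).
half = math.radians(60) / 2
q_thigh = (1.0, 0.0, 0.0, 0.0)
q_shank = (math.cos(half), math.sin(half), 0.0, 0.0)
angle = joint_angle(q_thigh, q_shank)  # 60.0 deg
```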
Fishing vessel accidents account for 70% of all maritime accidents in Korean waters, and most research has focused on identifying causes and developing mitigation policies in an attempt to reduce this rate. However, accident risk needs to be predicted and evaluated before such reduction measures are implemented. For this reason, we performed a risk analysis to calculate the risk of accidents and propose a risk criteria matrix with four quadrants, within which forecasted risk is plotted for the relative comparison of risks. For this research, we considered nine types of fishing vessel accidents as reported by the Korea Maritime Safety Tribunal (KMST). Given that no risk evaluation criteria have been established in Korea, we constructed a two-dimensional frequency-consequence grid of four quadrants into which the paired frequency and consequence of each accident type are plotted. With this simple structure, one can easily verify the effect of frequency and consequence on the resulting risk within each quadrant. Consequently, these risk evaluation results will help decision makers employ more realistic risk mitigation measures for accident types situated in different quadrants. As an application of the risk evaluation matrix, accident types were further analyzed by accident cause, including human error, and appropriate risk reduction options may be established by comparing the relative frequency and consequence of each accident cause.
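The four-quadrant placement can be sketched as below; the thresholds and accident figures are hypothetical stand-ins, not the KMST statistics.

```python
# Hypothetical quadrant thresholds for a 2x2 frequency-consequence grid.
FREQ_THRESHOLD = 10.0    # accidents per year
CONSEQ_THRESHOLD = 1.0   # fatalities per accident

def quadrant(freq, conseq):
    """Place an accident type into one of the four risk quadrants."""
    if freq >= FREQ_THRESHOLD and conseq >= CONSEQ_THRESHOLD:
        return "high frequency / high consequence"
    if freq >= FREQ_THRESHOLD:
        return "high frequency / low consequence"
    if conseq >= CONSEQ_THRESHOLD:
        return "low frequency / high consequence"
    return "low frequency / low consequence"

# Hypothetical accident types: (frequency, consequence).
accidents = {"collision": (25.0, 0.4), "capsizing": (3.0, 2.1)}
placed = {k: quadrant(*v) for k, v in accidents.items()}
```

Accident types landing in different quadrants would call for different mitigation strategies, e.g. frequency reduction versus consequence reduction.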
One-dimensional solute transport models have been developed over recent decades to predict the behavior and fate of solutes in rivers. The transient storage model (TSM) is the most popular because its simple conceptualization accounts for the complexity of natural rivers. However, the TSM is highly dependent on parameters that cannot be directly measured. In addition, the TSM represents the late-time behavior of concentration curves as an exponential function, which has been judged unsuitable for actual solute behavior in natural rivers. In this study, we propose a stochastic approach to solute transport analysis. We describe the model development and its application to a natural river, and compare the results of the proposed model with those of the TSM. To validate the proposed model, a tracer test was carried out in a 4.85 km reach of Gam Creek, a first-order tributary of the Nakdong River, South Korea. Comparing the power-law slope of the tail of the breakthrough curves, the stochastic storage model yielded an average error rate of 0.24, more accurate than the 14.03 and 1.87 of the advection-dispersion model and the TSM, respectively. This study demonstrates the appropriateness of the power-law residence time distribution for the hyporheic zone of Gam Creek.
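The power-law tail slope of a breakthrough curve, C(t) ∝ t^(−k) at late times, can be estimated with an ordinary least-squares fit in log-log space. This sketch uses synthetic data with a known slope of −2.5; real tracer data would of course be noisy.

```python
import math

# Synthetic late-time breakthrough curve with exact power-law tail
# C(t) = t^(-2.5) (hypothetical times and concentrations).
times = [100.0, 150.0, 200.0, 300.0, 500.0, 800.0]
conc = [t ** -2.5 for t in times]

# Least-squares slope of log(C) versus log(t).
xs = [math.log(t) for t in times]
ys = [math.log(c) for c in conc]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

The fitted slope is the quantity compared across the stochastic model, the TSM, and the advection-dispersion model in the study.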
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, may be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. Cloud services evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is closely related to, and combined with, various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and changes in the citation network among papers and the co-occurrence network of keywords using social network analysis measures. Through this analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes in cloud computing research topics using a proposed cloud computing "research trend map." A research trend map visualizes the positions of research topics in a two-dimensional space, with keyword frequency on the X-axis and the rate of increase in the degree centrality of keywords on the Y-axis. Based on the values of these two dimensions, the space of a research trend map is divided into four areas: maturation, growth, promising, and decline.
An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and the area where both are low is a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top by the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords for main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. The annual cloud computing research trend maps verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. These results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research.
The early stage of cloud computing research focused on understanding and investigating cloud computing as an emergent technology, linking it to established computing concepts. After this early stage, security and virtualization became the main issues, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from technical issues to application issues such as SLAs (Service Level Agreements).
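The quadrant logic of the trend map can be sketched as follows; the cut-off values and keyword statistics are hypothetical, chosen only so the four example keywords land in the four areas the text describes.

```python
# Hypothetical cut-offs for the two trend-map axes:
# X = keyword frequency, Y = rate of increase in degree centrality.
FREQ_CUT = 50
GROWTH_CUT = 0.10

def trend_area(frequency, centrality_growth):
    """Classify a keyword into one of the four trend-map areas."""
    if frequency >= FREQ_CUT:
        return "growth" if centrality_growth >= GROWTH_CUT else "maturation"
    return "promising" if centrality_growth >= GROWTH_CUT else "decline"

# Hypothetical keywords: (frequency, growth rate of degree centrality).
keywords = {
    "security": (30, 0.25),          # low freq, rising fast
    "virtualization": (80, 0.30),    # high freq, rising fast
    "cloud computing": (120, 0.05),  # high freq, flat
    "grid computing": (15, 0.01),    # low freq, flat
}
areas = {k: trend_area(*v) for k, v in keywords.items()}
```

Plotting each keyword's yearly (frequency, growth) pair shows its movement between areas, as with virtualization's shift from promising to growth.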
Nowadays there are many copyright infringement issues on the Internet because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to adopt DRM, copyright owners and content providers are still searching for a way to protect their content. A technology that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two approaches. First, the watermark information is generated from a two-dimensional barcode that carries an error correction code, so the information can recover itself if the errors fall within the error tolerance. Second, the algorithm uses the chip sequences of CDMA (code division multiple access). Together, these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, the QR code, a matrix barcode, can express information more freely than the other matrix barcodes. The QR code has square finder patterns at three corners that indicate the boundary of the symbol, a feature well suited to expressing the watermark information. That is, because the QR code is a two-dimensional, nonlinear matrix code, it can be modulated onto a spread spectrum and used for the watermarking algorithm.
The proposed algorithm assigns different spread spectrum sequences to individual users. When the assigned code sequences are orthogonal, the watermark information of an individual user can be identified from the audio content. The algorithm uses the Walsh code as the orthogonal code. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code. The modulated watermark is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0 at CBR (Constant Bit Rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 ms. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information was not affected, and the watermark could be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected when the embedding strength was 0.3, and under the echo attack the watermark could be identified when the strength was greater than or equal to 0.5.
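A minimal sketch of the CDMA-style embedding and detection, assuming Walsh codes of length 8 and treating a small random vector as a stand-in for one frame's DCT coefficients (the actual transform, frame sizes, and strengths used in the paper are not reproduced):

```python
import numpy as np

def walsh_codes(n):
    """Sylvester-Hadamard matrix of order n (a power of two); each row
    is an orthogonal +/-1 spreading code."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh_codes(8)
user_code = codes[3]            # hypothetical code assigned to one user
bits = np.array([1, -1, 1, 1])  # watermark bits from the 2-D barcode
strength = 0.3                  # embedding strength

# Stand-in for the frame's DCT coefficients (kept small so this sketch
# detects deterministically); a real implementation transforms the audio.
rng = np.random.default_rng(0)
host = rng.normal(0.0, 0.1, size=bits.size * 8)

# Embed: spread each bit over the 8 chips of the user's Walsh code.
marked = host + strength * np.repeat(bits, 8) * np.tile(user_code, bits.size)

# Detect: correlate each 8-chip segment with the user's code and take the
# sign; orthogonality suppresses watermarks spread with other users' codes.
detected = np.sign(marked.reshape(bits.size, 8) @ user_code)
```

Because the Walsh rows are mutually orthogonal, correlating with a different user's code yields only the (small) host contribution, which is what allows per-user watermark identification.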