• Title/Summary/Keyword: 공간데이터 처리 (Spatial Data Processing)

A Study on Low-Light Image Enhancement Technique for Improvement of Object Detection Accuracy in Construction Site (건설현장 내 객체검출 정확도 향상을 위한 저조도 영상 강화 기법에 관한 연구)

  • Jong-Ho Na;Jun-Ho Gong;Hyu-Soung Shin;Il-Dong Yun
    • Tunnel and Underground Space
    • /
    • v.34 no.3
    • /
    • pp.208-217
    • /
    • 2024
  • There has been considerable research effort toward developing and deploying deep learning-based surveillance systems to manage health and safety on construction sites. In particular, deep learning-based object detection under varying environmental conditions has been actively studied, because such variations degrade a model's detection performance. Among the environmental variables, object detection accuracy drops significantly under low illuminance, and consistent accuracy cannot be secured even when the model is trained on low-light images. Low-light enhancement is therefore needed to maintain performance under low illuminance. Accordingly, this paper conducts a comparative study of several deep learning-based low-light image enhancement models (GLADNet, KinD, LLFlow, Zero-DCE) using image data acquired at a construction site. The enhanced images were verified visually and analyzed quantitatively using image quality metrics such as PSNR, SSIM, and Delta-E. In the experiments, GLADNet showed excellent enhancement performance in both the quantitative and qualitative evaluations and was judged the most suitable low-light image enhancement model. If low-light image enhancement is applied as a preprocessing step for deep learning-based object detection, consistent detection performance can be expected even in low-light environments.
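
A minimal sketch of the quantitative evaluation step, assuming scikit-image is available; the file names are hypothetical stand-ins for a reference frame and an enhanced frame, and the metrics follow the abstract (PSNR, SSIM, and a CIEDE2000 Delta-E):

```python
import numpy as np
from skimage import color, io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical inputs: a well-lit reference frame and the enhanced frame.
reference = io.imread("reference_frame.png")
enhanced = io.imread("enhanced_frame.png")

# PSNR and SSIM compare the enhanced image against the reference.
psnr = peak_signal_noise_ratio(reference, enhanced)
ssim = structural_similarity(reference, enhanced, channel_axis=-1)

# Delta-E: mean CIEDE2000 color difference computed in CIELAB space.
delta_e = np.mean(
    color.deltaE_ciede2000(color.rgb2lab(reference), color.rgb2lab(enhanced))
)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}, Delta-E={delta_e:.2f}")
```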

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has undergone extremely fast evolution. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with features such as a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date there are many popular location-based AR services, such as Layar and Wikitude. These systems have a major limitation: the AR content is rarely overlaid accurately on the real target. Another line of research covers context-based AR services using image recognition and tracking, in which AR content is precisely overlaid on the real target, but real-time performance is restricted by retrieval time and is hard to achieve over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists mainly of two parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smartphone viewfinder when they point the smartphone at a certain building or landmark. For this, landmark recognition is applied, with SURF point-based features used in the matching process because of their robustness. To ensure that image retrieval and matching are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction; the query image is matched only against this selected data, so matching speed increases significantly. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users can simply search for the appropriate topic by keyword in linked data, which helps the system find the target URI and generate correct AR content. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure that builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and user location information restrict the retrieval range. Whereas existing research using clusters and GPS information achieves a retrieval time of around 70-80 ms, experimental results show that our approach reduces it to around 18-20 ms on average, so total processing time falls from 490-540 ms to 438-480 ms. The performance improvement becomes more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
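
A sketch, assuming OpenCV, of the retrieval shortcut the abstract describes: GPS and digital-compass readings shortlist nearby landmarks in the pointing direction, and feature matching runs only against that shortlist. The paper uses SURF; ORB stands in here because SURF requires OpenCV's non-free contrib build. All data structures, distances, and thresholds are illustrative assumptions:

```python
import math
import cv2

def shortlist(landmarks, user_lat, user_lon, heading_deg,
              max_dist_m=300.0, fov_deg=60.0):
    """Keep landmarks that are nearby and roughly in the pointing direction."""
    kept = []
    for lm in landmarks:  # each lm: {"lat", "lon", "descriptors", ...}
        dy = (lm["lat"] - user_lat) * 111_320.0       # meters per degree latitude
        dx = (lm["lon"] - user_lon) * 111_320.0 * math.cos(math.radians(user_lat))
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if dist <= max_dist_m and off_axis <= fov_deg / 2.0:
            kept.append(lm)
    return kept

def best_match(query_img, candidates):
    """Match the query image only against the GPS/compass shortlist."""
    orb = cv2.ORB_create()
    _, q_desc = orb.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    scored = [(len(matcher.match(q_desc, lm["descriptors"])), lm)
              for lm in candidates]
    return max(scored, key=lambda s: s[0], default=(0, None))[1]
```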

Backward Path Tracking Control of a Trailer Type Robot Using an RCGA-Based Model (RCGA 기반의 모델을 이용한 트레일러형 로봇의 후방경로 추종제어)

  • Wi, Yong-Uk;Kim, Heon-Hui;Ha, Yun-Su;Jin, Gang-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.9
    • /
    • pp.717-722
    • /
    • 2001
  • This paper presents a methodology for backward path tracking control of a trailer-type robot consisting of two parts, a tractor and a trailer. Controlling the motion of a trailer vehicle is difficult because its dynamics are non-holonomic. Therefore, this paper proposes modeling and parameter estimation of the system using a real-coded genetic algorithm (RCGA); a backward path tracking control algorithm is then derived based on the linearized model. Experimental results verify the effectiveness of the proposed method.
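
A compact sketch of a real-coded genetic algorithm for parameter estimation, in the spirit of the RCGA step described above. The first-order model, the GA operators (binary tournament, blend crossover, Gaussian mutation), and all constants are generic illustrations, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rcga(t, y_measured, simulate, bounds, pop_size=50, generations=200):
    """Estimate real-valued model parameters by minimizing squared error."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))

    def fitness(p):
        # Negative sum of squared errors: larger is better.
        return -np.sum((y_measured - simulate(p, t)) ** 2)

    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Binary tournament selection.
        pairs = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(scores[pairs[:, 0]] >= scores[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = pop[winners]
        # Blend (BLX-alpha-like) crossover between shuffled parent pairs.
        alpha = rng.uniform(-0.5, 1.5, parents.shape)
        children = parents + alpha * (parents[::-1] - parents)
        # Gaussian mutation, clipped back into the search bounds.
        children += rng.normal(0.0, 0.01 * (hi - lo), children.shape)
        children = np.clip(children, lo, hi)
        children[0] = pop[np.argmax(scores)]   # elitism: keep the current best
        pop = children
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

# Hypothetical usage: fit gain K and time constant tau of a first-order model.
t = np.linspace(0, 10, 100)
model = lambda p, t: p[0] * (1 - np.exp(-t / p[1]))
y = model([2.0, 1.5], t) + rng.normal(0, 0.02, t.shape)
print(rcga(t, y, model, bounds=[(0.1, 5.0), (0.1, 5.0)]))
```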

Evaluation of Road and Traffic Information Use Efficiency on Changes in LDM-based Electronic Horizon through Microscopic Simulation Model (미시적 교통 시뮬레이션을 활용한 LDM 기반 도로·교통정보 활성화 구간 변화에 따른 정보 이용 효율성 평가)

  • Kim, Hoe Kyoung;Chung, Younshik;Park, Jaehyung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.2
    • /
    • pp.231-238
    • /
    • 2023
  • Since the physically visible horizon that sensors for autonomous driving can perceive is limited, complementary use of digital map data such as a Local Dynamic Map (LDM) along the probable route of an Autonomous Vehicle (AV) has been proposed for safe and efficient driving. Although the amount of digital map data may be small compared to the amount of information collected by an AV's sensors, efficient management of map data is essential for efficient information processing in AVs. The objective of this study is to analyze the efficiency of information use and the information processing time of an AV as the active section of LDM-based static road and traffic information is expanded. To this end, a microscopic simulation model, VISSIM with its COM interface (VISSIM COM), was employed, and an area of about 9 km × 13 km was selected in the Busan Metropolitan Area, which includes heterogeneous traffic flows (i.e., uninterrupted and interrupted flows) as well as various road geometries. The LDM information used by the AVs is the real high-definition map (HDM) built on the basis of ISO 22726-1. The analysis showed that, as the electronic horizon expands, short links are intensively recognized on interrupted urban roads and the sum of link lengths increases accordingly, whereas on uninterrupted roads the number of recognized links is relatively small but the sum of link lengths is large owing to a few long links. This study therefore suggests efficient electronic horizon ranges for HDM data collection, processing, and management of 600 m on interrupted urban roads, covering the 12 links corresponding to three downstream intersections, and 700 m on uninterrupted roads, associated with a 10 km sum of link lengths.
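
A minimal sketch of the electronic-horizon selection step analyzed above: given the AV's probable route as an ordered list of links, only links whose cumulative length lies within the horizon are activated. The data structures and route are illustrative assumptions, not the VISSIM setup:

```python
from dataclasses import dataclass

@dataclass
class Link:
    link_id: int
    length_m: float

def active_links(route, horizon_m):
    """Return the route links lying within the electronic horizon ahead of the AV."""
    selected, covered = [], 0.0
    for link in route:            # route: ordered links along the probable path
        if covered >= horizon_m:
            break
        selected.append(link)
        covered += link.length_m
    return selected

# Hypothetical route; a 600 m horizon activates the first four links.
route = [Link(1, 150), Link(2, 220), Link(3, 90), Link(4, 400), Link(5, 60)]
print([l.link_id for l in active_links(route, 600)])  # -> [1, 2, 3, 4]
```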

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.177-193
    • /
    • 2014
  • Over the past decade, there has been rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, frequent disclosures of private information have raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the need has grown for information privacy policies and technologies for collecting and storing data, and for information privacy research in fields such as medicine, computer science, business, and statistics. Various information security incidents have made finding experts in the information security field an important issue, and objective measures for identifying such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper proposes a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. After dropping outliers and irrelevant papers, 784 papers remained to test the suggested hypotheses. The co-authorship network data for co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which concerns the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that as the degree of cooperation increased, researchers' publishing performance increased. Closeness centrality (H2) was also positively associated with publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, publishing performance increased. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network for information privacy can aid in understanding the deep relationships among researchers. By extracting characteristics of publishers and affiliations, this paper also illustrates the social network measures and their potential for finding experts in the information privacy field. Social concern about securing the objectivity of experts has increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research.
The difference in information diffusion according to media and proximity is difficult to generalize because of the small sample size. Further studies could therefore consider a larger sample and greater media diversity, and could explore in more detail the differences in information diffusion according to media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable; in network analysis research, however, network indices can only be computed after the network relationships have been created. An annual analysis could help mitigate this limitation.
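
A minimal sketch, assuming the networkx library and a made-up toy co-authorship graph rather than the NDSL data, of the three centrality measures behind H1-H3:

```python
import networkx as nx

# Toy co-authorship graph: nodes are researchers, edges are co-authored papers.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

degree = nx.degree_centrality(G)        # H1: extent of cooperation
closeness = nx.closeness_centrality(G)  # H2: efficiency of information acquisition
eigen = nx.eigenvector_centrality(G)    # H3: ties to well-connected researchers

for author in sorted(G):
    print(f"{author}: degree={degree[author]:.2f} "
          f"closeness={closeness[author]:.2f} eigenvector={eigen[author]:.2f}")
```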

Back Pressure Dissipation Techniques of Land Slope Using Volcanic Rocks (화산석을 이용한 절.성토사면의 배수압 소산기법)

  • Jang, Kwang-Jin;Choi, Eun-Hyuk;Ko, Jin-Seok;Lee, Seung-Yun;Jee, Hong-Kee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2006.05a
    • /
    • pp.1241-1245
    • /
    • 2006
  • When a structure is installed on a cut or fill slope, the most important consideration is the stability of the slope. In particular, many studies and much field experience have shown that the main cause of failure of structures installed on cut and fill slopes is the water pressure present in the backfill material. If a groundwater table exists and the water level rises suddenly due to short, concentrated heavy rainfall, and no drainage takes place through the structure, the stability of the cut or fill slope degrades rapidly. Several methods exist for dissipating this drainage pressure; this study, reflecting the regional characteristics of Jeju Island, investigates dissipating it by installing a Mattress/Filter filled with volcanic rock on cut and fill slopes. The Mattress/Filter is a hexagonal wire-mesh structure installed on slopes to prevent failure and erosion of embankments or cut and fill slopes. It is flexible, porous, free-draining, and supports vegetation, and unlike a concrete structure it requires no separate drainage facilities. The volcanic rock used here as fill material is distributed extensively across Jeju Island; basalt in particular accounts for more than 90% of the island's rock and is highly permeable. The porosity of basalt ranges from 0.02 to 0.36 depending on type, and Pyoseonri basalt shows an average porosity of 0.23; compared with the 0.3-0.8 porosity of sand, the material used in this study therefore has excellent permeability. Moreover, the outer surface of basalt consists of a fine porous texture, so the rock itself retains water; the additional weight of this retained water during drainage through the structure is another advantage of volcanic rock that other materials lack.

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.157-165
    • /
    • 2006
  • Changes in the external and internal factors of the cell require specific biological functions to maintain life. Such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied IFSA, a linear decomposition model that derives hidden variables, called 'expression modes', corresponding to these functions. To interpret gene interaction and regulation, we used a cross-correlation method on a given expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful for analyzing high-dimensional DNA microarray data, compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated/independent latent variables. However, they have difficulty grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. To overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models fundamentally aim at data reduction rather than at grouping data; we introduce a new scoring function essential to the IFSA method. In this paper we show that IFSA is useful for grouping functionally related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigating the interaction information of multiple genes.
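
A minimal sketch of the cross-correlation idea: measuring similarity between two expression profiles while allowing a time shift, which exact-match pattern comparison would miss. The profiles are synthetic and the normalization choice is an assumption:

```python
import numpy as np

def max_cross_correlation(x, y, max_lag=5):
    """Peak normalized correlation of two profiles over lags in [-max_lag, max_lag]."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.mean(x[lag:] * y[:len(y) - lag])
        else:
            c = np.mean(x[:lag] * y[-lag:])
        best = max(best, c)
    return best

t = np.linspace(0, 4 * np.pi, 100)
gene_a = np.sin(t)
gene_b = np.sin(t - 0.5)                      # same pattern, slightly delayed
print(max_cross_correlation(gene_a, gene_b))  # near 1.0 despite the delay
```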

The Efficient Merge Operation in Log Buffer-Based Flash Translation Layer for Enhanced Random Writing (임의쓰기 성능향상을 위한 로그블록 기반 FTL의 효율적인 합병연산)

  • Lee, Jun-Hyuk;Roh, Hong-Chan;Park, Sang-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.19D no.2
    • /
    • pp.161-186
    • /
    • 2012
  • Recently, flash memory has steadily increased in storage capacity while its price has fallen, making mass-storage SSDs (Solid State Drives) popular. Flash memory, however, has a number of limitations, and a special layer called the FTL (Flash Translation Layer) is used to compensate for them. The FTL, essential for operating the hardware efficiently within its restrictions, translates the logical sector numbers of the file system to the physical sector numbers of the flash memory. Poor performance is mainly attributed to the erase-before-write restriction of flash memory, and although there are many studies based on log blocks, problems remain in operating mass-storage flash memory. In log-block-based FTLs such as FAST, random writes with wide locality frequently trigger merge operations even when sectors in the data block are unused; in other words, inefficient block thrashing occurs and flash memory performance deteriorates. When a log block absorbs overwrites, it acts like a cache, and this technique contributes to improving flash memory performance. To improve random write performance, this study operates log blocks not only as a cache but across the entire flash memory, so that merge and erase operations are reduced by means of a distinct mapping table called the offset mapping table. The new FTL is named XAST (eXtensively-Associative Sector Translation). XAST manages the offset mapping table efficiently based on spatial and temporal locality.
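
A toy sketch of the offset-mapping idea: a log block keeps, per logical sector, the log page holding the newest copy, so overwrites are absorbed in the log block without triggering an immediate merge. Sizes, structures, and names here are illustrative, not the XAST design itself:

```python
PAGES_PER_BLOCK = 4

class LogBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # physical log pages
        self.offset_map = {}                    # logical sector -> log page index
        self.next_free = 0

    def write(self, sector, data):
        """Append the write; remember where the newest copy of the sector lives."""
        if self.next_free == PAGES_PER_BLOCK:
            return False                        # log block full: caller must merge
        self.pages[self.next_free] = data
        self.offset_map[sector] = self.next_free
        self.next_free += 1
        return True

    def read(self, sector):
        """Serve reads from the log block when it holds the newest copy."""
        page = self.offset_map.get(sector)
        return None if page is None else self.pages[page]

log = LogBlock()
log.write(7, "v1")
log.write(7, "v2")        # overwrite absorbed in the log block, no merge
print(log.read(7))        # -> "v2"
```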

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • In this paper, a method to directly calculate the major components of skin color, such as melanin and hemoglobin, from the RGB signal of a camera is proposed. The main components of skin color are typically obtained by measuring spectral reflectance with dedicated equipment and recombining the values at certain wavelengths of the measured light. Values calculated this way include the melanin index and the erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation method for these components from a general digital camera is hard to find, and a method of indirectly calculating the concentrations of melanin and hemoglobin using independent component analysis has been proposed. That method targets a region of an RGB image, extracts characteristic vectors for melanin and hemoglobin, and calculates concentrations in a manner similar to principal component analysis. Its disadvantages are that per-pixel calculation is difficult because a group of pixels in a region is used as input, and, since the feature-vector extraction is implemented as an optimization, it tends to yield different values on each execution. The final output is produced as an image representing the melanin and hemoglobin components by converting back to the RGB coordinate system, without using the feature vectors themselves. To improve on these disadvantages, the proposed method calculates melanin and hemoglobin component values in a feature space rather than in the RGB coordinate system, computes the spectral reflectance corresponding to the skin color from a general digital camera, and from that spectral reflectance calculates the detailed components constituting skin pigment, such as melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoids. The proposed method requires no special equipment such as a spectral reflectance measuring device or a multi-spectral camera; unlike the existing method, direct per-pixel calculation is possible, and identical results are obtained across repeated executions. The standard deviation of the estimated melanin and hemoglobin densities with the proposed method was 15% of that of the conventional method, i.e., about six times more stable.
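
A sketch, assuming scikit-learn, of the indirect ICA-based baseline the abstract contrasts with: pigment component values estimated from a patch of skin pixels in log-RGB (optical density) space. The pixel data is synthetic and the two-source assumption is an illustration; the proposed method instead computes per-pixel values in a fixed feature space:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
pixels = rng.uniform(0.2, 0.9, size=(1000, 3))   # synthetic skin-patch RGB values

# Beer-Lambert-style optical density: absorbance is roughly linear in
# pigment concentration, so ICA is applied in -log(RGB) space.
density = -np.log(pixels)

ica = FastICA(n_components=2, random_state=0)    # two sources: melanin, hemoglobin
concentrations = ica.fit_transform(density)      # per-pixel component values
print(concentrations.shape)                      # (1000, 2)
```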

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. 5G standardization organizations around the world have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for example for sporting events or emergencies. The second is support for e-Health, car reliability, and similar services; the third concerns VR games with delay sensitivity and real-time requirements. These groups have recently been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals of the control plane from the packets of the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example of high delay sensitivity. 5G must support the high reliability and delay sensitivity requirements of V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication applicable to roads and vehicles and refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between vehicles (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N); further kinds will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, its architecture is significant; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a central controller must communicate with many nodes and provide the processing power. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure, and the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a fully centralized SDN structure, research on the optimal size of an SDN domain for processing the information is needed. This study examined the SDN architecture in light of the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to reach the car without errors. The 5G small cell was assumed to have a radius of 50-100 m, and the maximum vehicle speed considered was 30-200 km/h, in order to examine the network architecture that minimizes delay.
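
A back-of-the-envelope sketch, using only the parameter ranges stated in the abstract, of why delay is critical here: the time a vehicle spends inside one small cell bounds the window for delivering an emergency message through the SDN controller. Taking the worst-case path as the full cell diameter is a simplifying assumption:

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case time in a cell: traverse the full diameter at constant speed."""
    return (2 * cell_radius_m) / (speed_kmh / 3.6)

# Parameter ranges stated in the abstract: 50-100 m radius, 30-200 km/h.
for radius in (50, 100):
    for speed in (30, 200):
        print(f"radius={radius} m, speed={speed} km/h -> "
              f"{dwell_time_s(radius, speed):.1f} s in cell")
```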