• Title/Summary/Keyword: information collection system

Search results: 1,610

A Study on the GEO-Tracking Algorithm of EOTS for the Construction of HILS system (HILS 시스템 구축을 위한 EOTS의 좌표지향 알고리즘 실험에 대한 연구)

  • Gyu-Chan Lee;Jeong-Won Kim;Dong-Gi Kwag
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.663-668 / 2023
  • Collecting information such as enemy positions and facilities has recently become critically important. To this end, unmanned aerial vehicles such as multicopters have been actively developed, along with the various mission equipment they carry. A coordinate-oriented (geo-pointing) algorithm computes the gaze angle that keeps the mission equipment fixed on a desired coordinate or position. Flight data and GPS data were collected, and the algorithm was simulated in Matlab. In the simulation using only the coordinate data, the average pan-axis error was about 0.42°, the tilt-axis error ranged from 0.003° to 0.43°, and the relatively wide error averaged about 0.15°. Converted into distances in the N and E directions, the error distance averaged about 2.23 m in the N direction and about -1.22 m in the E direction. The simulation using actual flight data yielded about 19 m CEP. We therefore studied the intrinsic error of the coordinate-oriented algorithm for surveillance and information collection, the main task of EOTS, confirmed that the quantitative target of 30 m CEP at 500 m is satisfied, and showed that the desired coordinates can be pointed at.
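The geo-pointing computation described above, deriving pan/tilt gaze angles from the platform position, heading, and a target coordinate, can be sketched as follows. This is a minimal flat-earth illustration of the general idea, not the paper's algorithm; the function name and the local ENU approximation are assumptions, adequate only for the short (~500 m) ranges discussed.

```python
import math

def geo_pointing_angles(platform_llh, heading_deg, target_llh):
    """Compute gimbal pan/tilt angles (deg) to point at a target coordinate.

    platform_llh / target_llh: (latitude deg, longitude deg, altitude m).
    Uses a flat-earth ENU approximation around the platform.
    """
    lat0, lon0, h0 = platform_llh
    lat1, lon1, h1 = target_llh
    R = 6_378_137.0  # WGS-84 equatorial radius, m
    north = math.radians(lat1 - lat0) * R
    east = math.radians(lon1 - lon0) * R * math.cos(math.radians(lat0))
    down = h0 - h1
    # Pan: bearing to the target minus the vehicle heading, wrapped to (-180, 180]
    bearing = math.degrees(math.atan2(east, north))
    pan = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    # Tilt: depression angle below the local horizontal
    tilt = math.degrees(math.atan2(down, math.hypot(north, east)))
    return pan, tilt
```

For a platform hovering at 100 m directly south of a ground target about 111 m away, this yields a pan of 0° and a tilt of roughly 42° below horizontal.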

Empirical correlation for in-situ deformation modulus of sedimentary rock slope mass and support system recommendation using the Qslope method

  • Yimin Mao;Mohammad Azarafza;Masoud Hajialilue Bonab;Marc Bascompta;Yaser A. Nanehkaran
    • Geomechanics and Engineering / v.35 no.5 / pp.539-554 / 2023
  • This article establishes empirical relationships for estimating the in-situ deformation moduli (Em and Gm) of sedimentary rock slope masses from Qslope values. The methodology comprises a comprehensive field survey, meticulous sample collection, and rigorous laboratory testing. A total of 26 specimens were sourced from five distinct locations within the South Pars (Assalouyeh) region, ensuring a representative dataset for robust correlations. The analysis reveals empirical connections between Em, the geomechanical characteristics of the rock mass, and the calculated Qslope values: Em = 2.859 Qslope + 4.628 (R² = 0.554) and Gm = 1.856 Qslope + 3.008 (R² = 0.524). The study also relates the in-situ deformation moduli to the widely used Rock Mass Rating (RMR), yielding predictive equations RMR = 18.12 Em^0.460 (R² = 0.798) and RMR = 22.09 Gm^0.460 (R² = 0.766). Beyond these correlations, RMR and Rock Quality Designation (RQD) are related to Qslope values as RMR = 34.05 e^(0.33 Qslope) (R² = 0.712) and RQD = 31.42 e^(0.549 Qslope) (R² = 0.902). Leveraging these results, the study offers an empirically derived support system tailored to discontinuous rock slopes, grounded in the Qslope methodology. This approach advances the understanding of sedimentary rock slope stability and provides practical tools for informed engineering decisions.
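The reported regressions can be collected into small helper functions for quick evaluation. This is a sketch for convenience only; the abstract does not state units for the moduli, so treat the outputs as illustrative.

```python
import math

# Empirical correlations reported in the abstract (R^2 values in comments).
def em_from_qslope(q):
    return 2.859 * q + 4.628          # Em = 2.859*Qslope + 4.628 (R^2 = 0.554)

def gm_from_qslope(q):
    return 1.856 * q + 3.008          # Gm = 1.856*Qslope + 3.008 (R^2 = 0.524)

def rmr_from_em(em):
    return 18.12 * em ** 0.460        # RMR = 18.12 * Em^0.460 (R^2 = 0.798)

def rmr_from_gm(gm):
    return 22.09 * gm ** 0.460        # RMR = 22.09 * Gm^0.460 (R^2 = 0.766)

def rmr_from_qslope(q):
    return 34.05 * math.exp(0.33 * q)   # RMR = 34.05*e^(0.33*Qslope) (R^2 = 0.712)

def rqd_from_qslope(q):
    return 31.42 * math.exp(0.549 * q)  # RQD = 31.42*e^(0.549*Qslope) (R^2 = 0.902)
```

For example, a Qslope of 1.0 gives Em ≈ 7.49 under the first correlation.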

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.207-228 / 2007
  • For a long time, measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as information systems have taken on more diverse functions, a number of studies have examined new measurement methodologies that combine financial measures with non-financial ones. In particular, recent research has applied the Balanced Scorecard (BSC) concept to measuring IT activities (IT BSC). BSC offers more than the integration of non-financial measures into a performance measurement system: its core rests on the cause-and-effect relationships between measures, which allow prediction of value-chain performance, communication and realization of corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard. The cause-and-effect chain is equally central to the IT BSC. However, the relationship between information systems and enterprise strategy, and the connections among IT performance measurement indicators, have received little study. Ittner et al. [2003] report that 77% of surveyed companies with an implemented BSC place little or no emphasis on soundly modeled cause-and-effect relationships, despite their importance as an integral part of BSC. This shortcoming has one theoretical and one practical explanation [Blumenberg and Hinz, 2006].
From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical standpoint, modeling corporate causalities is a complex task due to tedious data acquisition and ongoing reliability maintenance. Cause-and-effect relationships are nevertheless essential to BSCs, because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance; a performance measurement system like the BSC instead models the relationships of the underlying value chain as cause-and-effect relationships. To overcome the deficiencies of causal modeling in IT BSC, sound and robust causal modeling approaches are therefore required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. Following Van Grembergen [2000], we define four BSC perspectives for IT organizations: the Future Orientation perspective represents the human and technology resources IT needs to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of IT investments. Each perspective must be translated into corresponding metrics and measures that assess the current situation. Based on previous IT BSC studies and COBIT 4.1, this study suggests 12 CSFs for IT BSC, comprising 51 KPIs.
We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, which in turn has positive effects on the User Orientation perspective, which finally has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling is more appropriate than techniques such as multiple regression and LISREL when analyzing small sample sizes, and its use has been gaining interest among IS researchers because it can model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
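The authors estimate the chain with PLS path modeling; as a rough illustration only, the sketch below fits standardized bivariate slopes along the hypothesized Future Orientation → Operational Excellence → User Orientation → Business Contribution chain on simulated composite scores. This is not the authors' PLS estimation, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 276  # matches the survey sample size reported for the study

# Simulated composite scores following the hypothesized causal chain.
future = rng.normal(size=n)                                   # Future Orientation
operational = 0.6 * future + rng.normal(scale=0.8, size=n)    # Operational Excellence
user = 0.5 * operational + rng.normal(scale=0.8, size=n)      # User Orientation
business = 0.7 * user + rng.normal(scale=0.8, size=n)         # Business Contribution

def path_coef(x, y):
    """Standardized bivariate slope of y on x (the Pearson correlation)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(x @ y) / len(x)

for name, x, y in [("FO->OE", future, operational),
                   ("OE->UO", operational, user),
                   ("UO->BC", user, business)]:
    print(name, round(path_coef(x, y), 3))
```

On data generated this way, each estimated path coefficient comes out positive, mirroring the sign of the hypothesized effects; a real PLS analysis would additionally model the latent constructs from their indicator KPIs.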

Preliminary Study on Actuated Signal Control at Rural Area of Cheon-an City (천안시 외곽지역의 감응식 신호운영을 위한 기초연구)

  • Park, Soon-Yong;Kim, Dong-Nyong
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.8 no.3 / pp.52-63 / 2009
  • In Korea, urban signalized intersections in metropolitan areas are now controlled by traffic information centers or ITS centers. Cheon-an City also established a traffic information center through the 1st-3rd ITS public construction projects and operates services including bus information, traffic information collection and provision, parking information, and traffic-responsive control. In Cheon-an's metropolitan signal operation, traffic signal controllers are grouped by the main traffic-flow axes and run coordinated signal timing for the signalized arterials, with cycle and split adjusted by real-time traffic demand. An evaluation of the Cheon-an urban traffic-responsive control system by intersection delay and speed verified that delay decreased and vehicle speed improved. However, the rural signal control system connecting adjacent towns performed worse than the urban area because of unimproved TOD (time-of-day) plans. Actuated signal control was therefore examined as a substitute control scheme for isolated signalized intersections. The aim of this article is to compare actuated signal control with TOD operation at rural intersections of Cheon-an and to determine which of the two control modes is superior, evaluating vehicle delay with the HCM (2000) method and the micro-simulation CORSIM. Field test results show that actuated signal control produced lower delay than the existing TOD signal control. Simulation outcomes verified that non-optimized TOD has higher delay than optimized TOD, non-optimized actuated, and optimized actuated control; notably, the delays of these latter three modes did not differ significantly according to a paired-sample t-test, because only small traffic demands were loaded on each link.
The suggested actuated signal control is thus expected to be more effective than TOD operation at some rural isolated intersections that would otherwise require frequent traffic-volume surveys.
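The HCM (2000) delay evaluation mentioned above rests largely on the uniform control delay term. A minimal sketch of that term, assuming the standard d1 formula, illustrates why a shorter cycle fitted to actual demand (as actuated control provides) reduces delay at low volumes.

```python
def uniform_delay(cycle_s, green_s, x):
    """HCM 2000 uniform control delay d1 (s/veh) for one signal phase.

    cycle_s: cycle length (s), green_s: effective green (s),
    x: volume-to-capacity ratio (capped at 1.0 for this term).
    """
    g_over_c = green_s / cycle_s
    x = min(x, 1.0)
    return 0.5 * cycle_s * (1 - g_over_c) ** 2 / (1 - x * g_over_c)
```

For example, at v/c = 0.5 a 120 s cycle with 30 s of green gives d1 ≈ 38.6 s/veh, while a 60 s cycle with 20 s of green gives 16.0 s/veh. (The full HCM control delay adds an incremental term d2 and an initial-queue term d3, omitted here.)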


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Artificial intelligence technologies have been developing rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and recent interest and work on various algorithms have brought greater technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is now commonly combined with statistical AI techniques such as machine learning. Recently, knowledge bases have also been used to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, supporting intelligent processing in applications such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. Much recent research and technology in knowledge-based AI uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article.
This knowledge is created via mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we build a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. The model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document into an infobox category, i.e., an ontology class. After determining the document's class, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. We also ran comparative experiments with CRF and Bi-LSTM-CRF for the extraction step.
Through this process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. The methodology can also significantly reduce the expert effort required to construct instances that conform to the schema.
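The training-data step, adding BIO tags to sentences so that infobox values become labeled token spans, might look like the following sketch. The `bio_tag` helper and its exact-match rule are hypothetical simplifications of the paper's procedure, shown only to make the BIO scheme concrete.

```python
def bio_tag(tokens, value_tokens, relation):
    """Attach BIO tags to a tokenized sentence: the first occurrence of the
    infobox value span gets B-/I-<relation> tags, all other tokens get O."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + relation
            for j in range(i + 1, i + n):
                tags[j] = "I-" + relation
            break
    return list(zip(tokens, tags))

sentence = "Seoul is the capital of South Korea".split()
print(bio_tag(sentence, ["South", "Korea"], "country"))
```

A sequence labeler such as CRF or Bi-LSTM-CRF is then trained on (token, tag) sequences like these to recover the value spans from plain sentences.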

A Study on Damage factor Analysis of Slope Anchor based on 3D Numerical Model Combining UAS Image and Terrestrial LiDAR (UAS 영상 및 지상 LiDAR 조합한 3D 수치모형 기반 비탈면 앵커의 손상인자 분석에 관한 연구)

  • Lee, Chul-Hee;Lee, Jong-Hyun;Kim, Dal-Joo;Kang, Joon-Oh;Kwon, Young-Hun
    • Journal of the Korean Geotechnical Society / v.38 no.7 / pp.5-24 / 2022
  • Current performance evaluation of slope anchors qualitatively assesses the physical bonding between the anchor head and the ground, as well as cracks or breakage of the anchor head, but it does not measure these primary factors quantitatively, so time-dependent management of the anchors is almost impossible. This study evaluates a 3D numerical model built by SfM, combining UAS images with terrestrial LiDAR, to collect numerical data on the damage factors and to use those data for quantitative maintenance of anchor systems installed on slopes. The UAS 3D model, which often shows relatively low precision in the z-coordinate for vertical objects such as slopes, is combined with terrestrial LiDAR scan data to improve the accuracy of the z-coordinate measurement. After validating the system, a field test was conducted with ten anchors installed on a slope with deliberately damaged heads. The damages (cracks, breakages, and rotational displacements) were detected and numerically evaluated through orthogonal projection of the measurement system. The results show that the system, at 8K resolution, can detect cracks with apertures smaller than 0.3 mm within an error range of 0.05 mm. The system also successfully measured the volume of the damaged parts, showing that the maximum damaged area of an anchor head was within 3% of the original design guideline. The ground adhesion of the anchor head, for which the z-coordinate is highly relevant, is almost impossible to measure with the UAS 3D model alone because of blind spots; with the combined system, however, elevation differences between the anchor bottom and the irregular ground surface were identified, and the average over 20 locations was calculated as the ground adhesion. Additionally, rotation angles and displacements of the anchor head of less than 1" were detected.
These observations show that the validated 3D numerical model can yield quantitative data on anchor damage. Such data collection could form a database serving as a fundamental resource for quantitative anchor damage evaluation in the future.

The Study of the Effects of the Enterprise Mobile Social Network Service on User Satisfaction and the Continuous Use Intention (기업 모바일 소셜네트워크서비스 특성요인이 사용자 만족과 지속적 사용의도에 미치는 영향에 관한 연구)

  • Kim, Joon-Hee;Ha, Kyu-Soo
    • Journal of Digital Convergence / v.10 no.8 / pp.135-148 / 2012
  • This work investigates how the characteristic factors of enterprise mobile SNS affect user satisfaction and continuous use intention through the Technology Acceptance Model (TAM) proposed by Davis. To this end, the study builds on DeLone & McLean's Information Systems Success Model, Davis's TAM, and the post-acceptance model. Nine enterprises in Seoul, each with more than 100 employees, were chosen, and a questionnaire survey was conducted with 276 of their employees who had experienced enterprise mobile SNS, using a structured self-administered questionnaire. SPSS 18.0 and AMOS 18.0 were used to apply structural equation modeling. The results show that three groups of enterprise mobile SNS factors, systemic factors (system quality, information quality, and service quality), user factors (personal innovativeness and personal familiarity), and social factors (social influence and social interaction), affected user satisfaction and continuous use intention through perceived usefulness, perceived ease of use, and perceived enjoyment, and the direction of the effects matched theoretical predictions. The decision variables and mediating variables significantly affected user satisfaction and continuous use intention. Theoretical and practical implications of the results are discussed, along with the study's limitations and suggestions for future research.

Study of Registry Designing for the Hydrographic Data Standard Technology Operation (수로정보 표준기술 운용을 위한 등록소 설계 연구)

  • Kim, Ho-Yoon;Oh, Se-Woong;Shim, Woo-Sung;Suh, Sang-Hyun
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2012.06a / pp.87-88 / 2012
  • The IHO developed the S-100 standard for digital hydrographic data to support the proper and efficient use of hydrographic data and information for the safety of navigation and the protection of the marine environment. The former model, S-57, was considered too outdated to serve as the basis for the next-generation Electronic Navigational Chart (ENC) product specification. The key feature of the S-100 standard is its use of a Registry and its component Registers. This online Registry provides a universal standard that can be conveniently implemented in operational navigation systems. At present, the only registry is owned by the IHO and is available to domain experts. However, since S-100 is an international standard operated by an international organization, changes and updates to the data take time before they can be implemented when needed. Therefore, for the safety of navigation of domestic users and mariners, a separate domestic information registry needs to be developed. This study aims to build a Registry based on the IHO's published S-99 by designing a dedicated website that provides a collection of definitions and hydrographic data.
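A register in the S-100/S-99 sense is essentially a managed list of identified, versioned items. The sketch below loosely follows the general ISO 19135 register model rather than the exact S-99 schema, to show the shape such a domestic registry's entries might take; all class and field names here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RegisterItem:
    """One managed entry in a register (hypothetical simplified schema)."""
    item_id: int
    name: str
    definition: str
    status: str = "valid"                 # e.g. valid / superseded / retired
    date_accepted: Optional[date] = None

@dataclass
class Register:
    """A named collection of register items, keyed by item identifier."""
    name: str
    items: dict = field(default_factory=dict)

    def add(self, item: RegisterItem) -> None:
        self.items[item.item_id] = item

    def lookup(self, item_id: int) -> RegisterItem:
        return self.items[item_id]
```

A web-based registry would wrap registers like this with submission, review, and publication workflows so that domestic updates need not wait on the international process.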


A Study on the Recognition of Mid- to Long-term Comprehensive Development Plan for the Library in Gimpo City (김포시 도서관 중장기 종합발전계획 수립을 위한 인식조사 연구)

  • Noh, Younghee;Chang, Inho;Kang, Ji-Hei;Shin, Youngji;Kwak, Woojung
    • Journal of Korean Library and Information Science Society / v.51 no.1 / pp.227-253 / 2020
  • The purpose of this study is to establish a mid- to long-term comprehensive development plan for the libraries of Gimpo City, to lay out an overall system for the libraries scattered across the city's regions, and to present a vision and strategy unique to Gimpo's libraries that differentiates them from other regions. Surveys and interviews were conducted with residents of Gimpo-si (library users and beneficiaries) and with library personnel. The results show, first, that all libraries in Gimpo-si appear to be oriented toward user-centered multicultural space services, with each library adopting a differentiated concept and theme grounded in the public-library functions and roles of collection building, information services, and diverse programs. Second, regarding the Gimpo libraries' collections, books with high usage rates should be acquired first for each library, and the collections then expanded to meet user needs. Third, to improve the Gimpo libraries' services, new services should be derived by analyzing the programs and services of other local libraries and developed with citizen participation so as to raise satisfaction.

Index Management Method using Page Mapping Log in B+-Tree based on NAND Flash Memory (NAND 플래시 메모리 기반 B+ 트리에서 페이지 매핑 로그를 이용한 색인 관리 기법)

  • Kim, Seon Hwan;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.20 no.5 / pp.1-12 / 2015
  • NAND flash memory is widely used in storage systems because of its low price, low power consumption, and fast access speed. However, NAND flash memory cannot update pages in place, so applications designed for hard-disk storage require a flash translation layer (FTL). The FTL includes complex functions such as address mapping, garbage collection, and wear leveling, and implementing it on low-power embedded systems is difficult because of its memory requirements and operational overhead. Accordingly, many index data structures for NAND flash memory have been studied for embedded systems. These structures improve overall performance by decreasing page write counts, but as a side effect they increase page read counts. We therefore propose an index management method for a B+-Tree on NAND flash memory that uses a page mapping log table to decrease page write counts without increasing page read counts. The page mapping log table registers the page address information of changed index nodes and is consulted when retrieving records. In our experiments, the proposed method reduces page read counts by up to about 61% and page write counts by up to about 31%, compared to related index data structures.
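The core idea above, recording old-to-new page addresses in a small log table instead of rewriting parent index pages every time a child node moves to a new flash page, can be sketched as follows. Class and method names are illustrative, not the paper's; lookups simply resolve stale addresses through the mapping chain.

```python
class PageMappingLog:
    """Hypothetical sketch of a page mapping log table for a flash B+-Tree.

    When an index node is rewritten to a new physical page (NAND pages
    cannot be updated in place), the old->new mapping is logged here, so
    parent pages holding the old address need not be rewritten."""

    def __init__(self):
        self.table = {}  # old page address -> newer page address

    def remap(self, old_addr, new_addr):
        """Record that the node formerly at old_addr now lives at new_addr."""
        self.table[old_addr] = new_addr

    def resolve(self, addr):
        """Follow the mapping chain to the newest physical page address."""
        while addr in self.table:
            addr = self.table[addr]
        return addr
```

For example, after `remap(10, 42)` and `remap(42, 77)`, a parent page still pointing at page 10 resolves to the current page 77 without ever having been rewritten; the trade-off is the small extra lookup cost on reads, which the paper's design aims to keep from growing.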