• Title/Summary/Keyword: ICT Technology


Theoretical Research for Unmanned Aircraft Electromagnetic Survey: Electromagnetic Field Calculation and Analysis by Arbitrary Shaped Transmitter-Loop (무인 항공 전자탐사 이론 연구: 임의 모양의 송신루프에 의한 전자기장 반응 계산 및 분석)

  • Bang, Minkyu;Oh, Seokmin;Seol, Soon Jee;Lee, Ki Ha;Cho, Seong-Jun
    • Geophysics and Geophysical Exploration
    • /
    • v.21 no.3
    • /
    • pp.150-161
    • /
    • 2018
  • Recently, unmanned aircraft EM (electromagnetic) surveys based on ICT (Information and Communication Technology) have been widely utilized because of their efficiency in regional surveys. We performed a theoretical study on the unmanned airship EM system developed by KIGAM (Korea Institute of Geoscience and Mineral resources) as part of the practical application of unmanned aircraft EM surveys. Since this system has different configurations of transmitting and receiving loops compared to conventional aircraft EM systems, a new technique is required for the appropriate interpretation of the measured responses. Therefore, we proposed a method to calculate the EM field for an arbitrarily shaped transmitter loop and verified its validity through comparison with the analytic solution for a circular loop. In addition, to simulate the magnetic responses of three-dimensionally (3D) distributed anomalies, we incorporated our algorithm into a 3D frequency-domain EM modeling algorithm based on the edge-FEM (finite element method). Through the analysis of magnetic field responses from a subsurface anomaly, it was found that the response decreases as the depth of the anomaly or the flight altitude increases. It was also confirmed that the response becomes smaller as the resistivity of the anomaly increases. However, the out-of-phase component shows a nonlinear trend depending on the depth of the anomaly and the frequency used, which makes it difficult to apply a simple analysis based on mapping the magnitude of the responses and can cause a non-uniqueness problem in calculating the apparent resistivity. Thus, analyzing the appropriate frequency band and flight altitude in light of the purpose of the survey and the site conditions is a prerequisite when conducting a survey using the unmanned aircraft EM system.
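
The abstract's central computational idea is evaluating the field of an arbitrarily shaped transmitter loop by superposing the contributions of small loop segments. The sketch below is not the authors' layered-earth edge-FEM algorithm; it is a minimal free-space illustration, assuming the loop is discretized into straight segments whose analytic Biot-Savart fields are summed (NumPy, hypothetical function names).

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def biot_savart_loop(loop_xyz, obs_xyz, current=1.0):
    """Free-space magnetic field of a closed polygonal transmitter loop.

    loop_xyz : (N, 3) array of vertices tracing the loop (closed implicitly).
    obs_xyz  : (M, 3) array of observation points.
    Returns  : (M, 3) array of B-field vectors [T].

    Each straight segment contributes the analytic Biot-Savart field of a
    finite current element; summing over all segments is what makes an
    arbitrary loop shape tractable.
    """
    starts = loop_xyz
    ends = np.roll(loop_xyz, -1, axis=0)  # close the loop
    B = np.zeros_like(obs_xyz, dtype=float)
    for a, b in zip(starts, ends):
        # vectors from the segment end points to each observation point
        r1 = obs_xyz - a
        r2 = obs_xyz - b
        d1 = np.linalg.norm(r1, axis=1)
        d2 = np.linalg.norm(r2, axis=1)
        cross = np.cross(r1, r2)
        denom = d1 * d2 * (d1 * d2 + np.einsum("ij,ij->i", r1, r2))
        # analytic field of a finite straight current segment
        B += MU0 * current / (4 * np.pi) * cross * ((d1 + d2) / denom)[:, None]
    return B

# Example: a 10 m x 10 m square loop observed 30 m below its centre
square = np.array([[-5, -5, 0], [5, -5, 0], [5, 5, 0], [-5, 5, 0]], float)
obs = np.array([[0.0, 0.0, -30.0]])
print(biot_savart_loop(square, obs))
```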

A CF-based Health Functional Recommender System using Extended User Similarity Measure (확장된 사용자 유사도를 이용한 CF-기반 건강기능식품 추천 시스템)

  • Sein Hong;Euiju Jeong;Jaekyeong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.1-17
    • /
    • 2023
  • With the recent rapid development of ICT (Information and Communication Technology) and the popularization of digital devices, the size of the online market continues to grow. As a result, we live in a flood of information, and customers face an information overload problem that requires a lot of time and money to select products. Therefore, personalized recommender systems have become an essential methodology to address such issues. Collaborative Filtering (CF) is the most widely used recommender system. Traditional recommender systems mainly utilize quantitative data such as rating values, resulting in poor recommendation accuracy, because quantitative data cannot fully reflect the user's preferences. To solve this problem, studies that reflect qualitative data, such as review contents, are being actively conducted these days. To quantify user review contents, text mining was used in this study. The general CF consists of the following three steps: user-item matrix generation, Top-N neighborhood group search, and Top-K recommendation list generation. In this study, we propose a recommendation algorithm that applies an extended similarity measure, which utilizes quantified review contents in addition to user rating values. After calculating review similarity by applying TF-IDF, Word2Vec, and Doc2Vec techniques to the review contents, the extended similarity is created by combining the user rating similarity with the quantified review similarity. To verify this, we used user rating and review data from the "Health and Personal Care" category of the e-commerce site Amazon. The proposed recommendation model using the extended similarity measure showed superior performance to the traditional recommendation model using only a user rating value-based similarity measure. In addition, among the various text mining techniques, the similarity obtained with the TF-IDF technique showed the best performance when used in the neighborhood group search and recommendation list generation steps.
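
As a rough illustration of the extended similarity idea, the sketch below combines a rating-based user similarity with a TF-IDF review-based similarity through a simple weighted sum. The weighting scheme, the alpha parameter, and the function names are assumptions rather than the paper's exact formulation; Word2Vec or Doc2Vec embeddings could be plugged in where the TF-IDF vectors are used.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extended_user_similarity(rating_matrix, user_reviews, alpha=0.5):
    """Combine rating-based and review-based user similarities.

    rating_matrix : (n_users, n_items) array, 0 where no rating exists.
    user_reviews  : list of n_users strings (each user's reviews concatenated).
    alpha         : weight on the rating-based similarity (assumed value).
    """
    # similarity from the user-item rating matrix
    sim_rating = cosine_similarity(rating_matrix)

    # similarity from quantified review text (TF-IDF shown here)
    tfidf = TfidfVectorizer().fit_transform(user_reviews)
    sim_review = cosine_similarity(tfidf)

    # extended similarity: weighted combination of the two sources
    return alpha * sim_rating + (1 - alpha) * sim_review

def predict_rating(rating_matrix, sim, user, item, k=20):
    """Predict a rating from the Top-k most similar neighbours who rated the item."""
    rated = np.where(rating_matrix[:, item] > 0)[0]
    rated = rated[rated != user]
    if rated.size == 0:
        return rating_matrix[rating_matrix > 0].mean()  # global fallback
    neighbours = rated[np.argsort(sim[user, rated])[::-1][:k]]
    weights = sim[user, neighbours]
    if weights.sum() == 0:
        return rating_matrix[neighbours, item].mean()
    return np.dot(weights, rating_matrix[neighbours, item]) / weights.sum()

# Example usage with a tiny toy dataset
ratings = np.array([[5, 0, 3], [4, 2, 0], [0, 5, 4]], dtype=float)
reviews = ["great taste easy to swallow", "good value mild effect", "strong effect bad taste"]
sim = extended_user_similarity(ratings, reviews, alpha=0.5)
print(predict_rating(ratings, sim, user=0, item=1))
```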

A Study on the Performance Verification Method of Small-Sized LTE-Maritime Transceiver (소형 초고속해상무선통신망 송수신기 성능 검증 방안에 관한 연구)

  • Seok Woo;Bu-young Kim;Woo-Seong Shim
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.7
    • /
    • pp.902-909
    • /
    • 2023
  • This study addressed the performance testing of a small-sized LTE-Maritime (LTE-M) transceiver that was developed and promoted to expand the use of intelligent maritime traffic information services led by the Ministry of Oceans and Fisheries, with the aim of supporting the prevention of maritime accidents. According to statistics, approximately 30% of all marine accidents in Korean waters involve ships weighing less than 3 tons. Therefore, the blind spots of maritime safety must be addressed through the development of small-sized transceivers. The small transceiver may be used in fishing boats operating in coastal waters and in water leisure equipment near the coastline. Therefore, it is necessary to verify whether sufficient performance and stable communication quality are provided, considering their real usage environment. In this study, we reviewed the communication quality goals of the LTE-M network and the performance requirements for small-sized transceivers suggested by the Ministry of Oceans and Fisheries, and proposed a test plan to appropriately evaluate the performance of small-sized transceivers. The validity of the proposed test method was verified in six real-sea areas with a high frequency of marine accidents. Consequently, the downlink and uplink transmission speeds of the small-sized LTE-M transceiver showed performances of 9 Mbps or more and 3 Mbps or more, respectively. In addition, using the coverage analysis system, coverages of more than 95% and 100% were confirmed in the intensive management zone (0-30 km) and the zone of interest (30-50 km), respectively. The performance evaluation method and test results proposed in this paper are expected to be used as reference materials for verifying the performance of transceivers, contributing to the spread of government-promoted e-navigation services and small-sized transceivers.
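
The coverage figures quoted above amount to counting the share of measurement points in each distance zone that meet the throughput targets. The sketch below illustrates only that bookkeeping; the field names, the pass criterion (9 Mbps downlink / 3 Mbps uplink at each point), and the zone boundaries are assumptions, and the study's coverage analysis system may define coverage differently (e.g., from signal strength).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    """One sea-trial measurement point (all field names are assumptions)."""
    distance_km: float     # distance from the coastline
    downlink_mbps: float
    uplink_mbps: float

def zone_coverage(samples: List[Sample], lo_km: float, hi_km: float,
                  dl_target: float = 9.0, ul_target: float = 3.0) -> float:
    """Percentage of points inside [lo_km, hi_km) meeting both throughput targets."""
    in_zone = [s for s in samples if lo_km <= s.distance_km < hi_km]
    if not in_zone:
        return float("nan")
    ok = sum(1 for s in in_zone
             if s.downlink_mbps >= dl_target and s.uplink_mbps >= ul_target)
    return 100.0 * ok / len(in_zone)

# Example: intensive management zone (0-30 km) and zone of interest (30-50 km)
data = [Sample(12.0, 11.2, 3.5), Sample(28.0, 9.4, 3.1), Sample(41.0, 9.8, 3.3)]
print(zone_coverage(data, 0, 30), zone_coverage(data, 30, 50))
```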

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on job posting information from major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. Then, we identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships between the data, only job postings published on the job sites at the same time are kept, and the remaining job information is deleted. Next, the job classification systems of the job sites are mapped to one another using the association rules derived from the association analysis. The mapping between these market segments is completed, discussed with experts, further mapped to the SQF, and a new job classification system is finally proposed. As a result, more than 30,000 job listings were collected in XML format through the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', which are the main job sites in Korea. After filtering to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method. Based on these 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security related jobs, consisted of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation related jobs, consisted of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consisted of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology related jobs, consisted of three secondary and six tertiary classifications. In particular, the new job classification system has relatively flexible classification depth, unlike other existing classification systems. WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and subdivides them into keywords; saramin also divides jobs into two levels and subdivides them in keyword form. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs.
In this classification system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down to the same depth. We also combined rules derived from association analysis of the collected market data with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by attempting a data-based mapping between occupations through association analysis rather than relying on the intuition of a few experts. However, this study has the limitation that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate public recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry in the future, and the approach is expected to be transferred to other industries building on its success in the SW industry.
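
The core mining step described above, deriving association rules from cross-posted job listings, can be illustrated as follows. This is a self-contained, simplified Apriori sketch (exact support counting, minimal candidate pruning), not the study's implementation; the category labels and the support/confidence thresholds are purely illustrative.

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.4, min_confidence=0.8):
    """Level-wise Apriori: frequent itemsets first, then confidence-filtered rules."""
    n = len(transactions)
    tx = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in tx if itemset <= t) / n

    # L1: frequent single items
    items = {i for t in tx for i in t}
    frequent = {frozenset([i]): support(frozenset([i])) for i in items}
    frequent = {s: sup for s, sup in frequent.items() if sup >= min_support}
    all_frequent = dict(frequent)

    # Lk: join frequent (k-1)-itemsets, keep those above the support threshold
    k = 2
    while frequent:
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        frequent = {c: sup for c in candidates
                    if (sup := support(c)) >= min_support}
        all_frequent.update(frequent)
        k += 1

    # Rules: antecedent -> consequent with sufficient confidence
    rules = []
    for itemset, sup in all_frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = sup / all_frequent[antecedent]
                if conf >= min_confidence:
                    rules.append((set(antecedent), set(itemset - antecedent), sup, conf))
    return rules

# Toy cross-posted listings: each transaction carries the category label the same
# posting received on each source (labels are illustrative, not the real taxonomy)
postings = [
    {"WORKNET:InfoSecurity", "JOBKOREA:SecurityConsulting", "SQF:Security"},
    {"WORKNET:InfoSecurity", "JOBKOREA:SecurityConsulting", "SQF:Security"},
    {"WORKNET:WebDev", "JOBKOREA:WebProgramming", "SQF:WebDevelopment"},
    {"WORKNET:WebDev", "JOBKOREA:WebProgramming", "SQF:WebDevelopment"},
    {"WORKNET:DBAdmin", "JOBKOREA:DBA", "SQF:Database"},
]
for ante, cons, sup, conf in apriori_rules(postings):
    print(ante, "->", cons, f"support={sup:.2f} confidence={conf:.2f}")
```

Rules whose antecedent and consequent come from different sites (or from a site and the SQF) are the ones that suggest a mapping between classification systems.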

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza is known to have first been reported in 1878, and it has risen to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In nations where the disease has not spread, foot-and-mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as a single zone of life, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to be aware of the occurrence of the disease and to take action before it spreads. As soon as either a human or an animal infectious disease is confirmed, an epidemiological investigation of the confirmed cases is conducted, and measures to prevent the spread of the disease are taken according to the investigation results. The foundation of an epidemiological investigation is figuring out where one has been and whom one has met. From a data perspective, this can be defined as predicting the cause of an outbreak, the outbreak location, and future infections by collecting and analyzing geographic and relational data. Recently, attempts have been made to develop infectious disease prediction models using Big Data and deep learning technology, but model building studies and case reports remain scarce. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, a more accurate prediction model was constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the 2017 prediction model added the risk of diffusion to facilities, and the performance of the model was improved by considering the hyper-parameters of the modeling in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning model. The difference between the 2016 model and the 2017 model is that the later model additionally used visit information on facilities such as feed factories and slaughterhouses, as well as information on poultry that had been limited to chickens and ducks but was expanded to geese and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environment Big Data. The significance of this study is that it describes the evolution of a Big Data-based prediction model that is used in the field, and the model is expected to become more complete if the characteristics of the viruses are also taken into consideration. 
This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will enable more proactive and effective prevention.
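
The modelling loop described in the abstract (train several classifiers on vehicle-movement features, then compare them with a confusion matrix and ROC curve) can be sketched roughly as follows. The features, labels, and thresholds below are synthetic assumptions for illustration only; they are not the KT / Ministry of Science and ICT data or the authors' model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Toy farm-level feature table (feature names are assumptions; the study used
# livestock-vehicle movement, facility-visit, and environment Big Data).
rng = np.random.default_rng(0)
n_farms = 1000
X = np.column_stack([
    rng.poisson(5, n_farms),      # visits by livestock-related vehicles
    rng.poisson(1, n_farms),      # visits linked to feed factories / slaughterhouses
    rng.uniform(0, 50, n_farms),  # distance to the nearest confirmed case [km]
])
# synthetic label: infection risk grows with vehicle traffic, shrinks with distance
logit = 0.4 * X[:, 0] + 0.8 * X[:, 1] - 0.15 * X[:, 2] - 1.0
y = (rng.uniform(size=n_farms) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    print(type(model).__name__,
          "AUC:", round(roc_auc_score(y_te, prob), 3),
          "confusion matrix (tn, fp, fn, tp):",
          confusion_matrix(y_te, prob > 0.5).ravel())
```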