• Title/Summary/Keyword: Information & Communication Technology (ICT)

Design of Cloud-Based Data Analysis System for Culture Medium Management in Smart Greenhouses (스마트온실 배양액 관리를 위한 클라우드 기반 데이터 분석시스템 설계)

  • Heo, Jeong-Wook; Park, Kyeong-Hun; Lee, Jae-Su; Hong, Seung-Gil; Lee, Gong-In; Baek, Jeong-Hyun
    • Korean Journal of Environmental Agriculture / v.37 no.4 / pp.251-259 / 2018
  • BACKGROUND: Various culture media have been used for hydroponic culture of horticultural plants in smart greenhouses with natural and artificial light. In a smart farm system, the culture medium is managed with ICT (Information and Communication Technology) and/or IoT (Internet of Things) to control the medium volume and the components absorbed by plants during the cultivation period. This study was conducted to develop a cloud-based data analysis system for effective management of the culture medium applied to hydroponic culture and plant growth in smart greenhouses. METHODS AND RESULTS: A conventional inorganic Yamazaki medium and organic media derived from agricultural byproducts such as immature fruit, leaf, or stem were used as hydroponic culture media. Component changes of the solutions were monitored according to the growth stage, and plant growth was observed. Red and green lettuce seedlings (Lactuca sativa L.) with 2~3 true leaves were used as plant materials. The seedlings were grown hydroponically for 35 days in a smart greenhouse under fluorescent and light-emitting diode (LED) lights at a light intensity of 150 µmol/m²/s. Growth data of the seedlings were classified and stored, on the basis of growth parameters, in a relational database on a virtual machine generated from an OpenStack cloud system. The relation between plant growth and the absorption patterns of nine inorganic components in the media during the cultivation period was investigated. The stored data on component changes and growth parameters were visualized on the web through a web framework and Node JS. CONCLUSION: Time-series changes of the inorganic components in the culture media were observed. The increases in unfolded leaves and fresh weight of the seedlings depended mainly on macroelements such as NO3-N and were affected by the different inorganic and organic media. Through the developed data analysis system, actual measurement data were provided on the user's smart device, and analysis and comparison of the data were visualized graphically as time series based on the cloud database. The data analysis system can support agricultural management through data visualization and/or plant growth analysis across agricultural sites regardless of various changes in the culture environment.
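As a rough illustration of the storage-and-visualization flow described above, the sketch below stores culture-medium measurements in a relational table and pulls the time series of one component for plotting; the table and column names, the example values, and the SQLite backend are assumptions for illustration, not the authors' implementation (which uses an OpenStack virtual machine and a Node JS web stack).

```python
# Minimal sketch: store culture-medium measurements in a relational table and
# pull a time series of one component (e.g., NO3-N) for visualization.
# Table/column names and the SQLite backend are illustrative assumptions.
import sqlite3

conn = sqlite3.connect("greenhouse.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS medium_measurement (
        measured_at TEXT,      -- ISO date of sampling
        medium      TEXT,      -- 'Yamazaki' or an organic medium label
        component   TEXT,      -- e.g., 'NO3-N', 'P', 'K'
        mg_per_l    REAL       -- measured concentration
    )
""")
conn.execute(
    "INSERT INTO medium_measurement VALUES (?, ?, ?, ?)",
    ("2018-05-01", "Yamazaki", "NO3-N", 105.2),
)
conn.commit()

# Time series of NO3-N for one medium, ordered for plotting on the web client.
rows = conn.execute(
    """SELECT measured_at, mg_per_l
       FROM medium_measurement
       WHERE medium = ? AND component = ?
       ORDER BY measured_at""",
    ("Yamazaki", "NO3-N"),
).fetchall()
print(rows)
```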

A CF-based Health Functional Recommender System using Extended User Similarity Measure (확장된 사용자 유사도를 이용한 CF-기반 건강기능식품 추천 시스템)

  • Sein Hong; Euiju Jeong; Jaekyeong Kim
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.1-17 / 2023
  • With the recent rapid development of ICT (Information and Communication Technology) and the popularization of digital devices, the online market continues to grow, and we live in a flood of information. Customers therefore face information overload and must spend considerable time and money to select products, so personalized recommender systems have become an essential methodology to address this issue. Collaborative Filtering (CF) is the most widely used recommender system. Traditional recommender systems mainly utilize quantitative data such as rating values, which limits recommendation accuracy because quantitative data cannot fully reflect users' preferences. To solve this problem, studies that also reflect qualitative data, such as review contents, are being actively conducted. In this study, text mining was used to quantify user review contents. General CF consists of three steps: user-item matrix generation, Top-N neighborhood group search, and Top-K recommendation list generation. We propose a recommendation algorithm that applies an extended similarity measure, which utilizes quantified review contents in addition to user rating values. After calculating review similarity by applying the TF-IDF, Word2Vec, and Doc2Vec techniques to review contents, the extended similarity is created by combining the user rating similarity and the review similarity. To verify this, we used user ratings and review data from the "Health and Personal Care" category of the e-commerce site Amazon. The proposed recommendation model using the extended similarity measure showed performance superior to the traditional recommendation model using only rating-based similarity. In addition, among the text mining techniques, the similarity obtained with TF-IDF showed the best performance when used in the neighbor group search and recommendation list generation steps.
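The sketch below illustrates the general idea of an extended user similarity that blends rating-based similarity with review-text similarity; the toy ratings and reviews, the TF-IDF-only text representation, and the simple weighted blend with an assumed alpha are illustrative and not the authors' exact formulation.

```python
# Minimal sketch of an "extended" user similarity that blends rating similarity
# with review-text similarity (TF-IDF), as described in the abstract.
# The toy data, the alpha weight, and the simple blend are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# user-item rating matrix (rows: users, cols: items), 0 = unrated
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 5, 4],
])

# one concatenated review document per user
reviews = [
    "great vitamin, easy to swallow, noticeable energy boost",
    "good vitamin but the capsules are large",
    "omega supplement smelled fishy, would not repurchase",
]

rating_sim = cosine_similarity(ratings)                                   # rating-based similarity
review_sim = cosine_similarity(TfidfVectorizer().fit_transform(reviews))  # text-based similarity

alpha = 0.5                                                               # blend weight (assumed)
extended_sim = alpha * rating_sim + (1 - alpha) * review_sim

print(np.round(extended_sim, 3))  # used for Top-N neighbor search
```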

A Study on the Performance Verification Method of Small-Sized LTE-Maritime Transceiver (소형 초고속해상무선통신망 송수신기 성능 검증 방안에 관한 연구)

  • Seok Woo; Bu-young Kim; Woo-Seong Shim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.902-909 / 2023
  • This study examined performance testing of a small-sized LTE-Maritime (LTE-M) transceiver developed to expand the use of the intelligent maritime traffic information service promoted by the Ministry of Oceans and Fisheries with the aim of preventing maritime accidents. According to statistics, approximately 30% of all marine accidents in Korean waters involve ships weighing less than 3 tons, so these blind spots in maritime safety must be addressed through the development of small-sized transceivers. The small transceiver may be used in fishing boats operating in coastal waters and in water leisure equipment near the coastline; it is therefore necessary to verify whether it provides sufficient performance and stable communication quality in its actual usage environment. In this study, we reviewed the communication quality goals of the LTE-M network and the performance requirements for small-sized transceivers suggested by the Ministry of Oceans and Fisheries, and proposed a test plan to appropriately evaluate the performance of small-sized transceivers. The validity of the proposed test method was verified in six real-sea areas with a high frequency of marine accidents. The downlink and uplink transmission speeds of the small-sized LTE-M transceiver were 9 Mbps or more and 3 Mbps or more, respectively. In addition, using the coverage analysis system, coverage of more than 95% and 100% was confirmed in the intensive management zone (0-30 km) and the zone of interest (30-50 km), respectively. The performance evaluation method and test results proposed in this paper are expected to serve as reference material for verifying transceiver performance, contributing to the spread of government-promoted e-navigation services and small-sized transceivers.
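A minimal sketch of the kind of pass/fail check implied by the abstract, comparing measured throughput samples against the 9 Mbps downlink and 3 Mbps uplink targets and computing zone coverage from per-point link availability; the record format, the helper function, and the toy values are assumptions, not the actual coverage analysis system.

```python
# Minimal sketch: compare measured throughput against the 9/3 Mbps targets and
# compute zone coverage from per-point link availability. Data and helper
# names are illustrative assumptions.
from statistics import mean

def zone_coverage(points, zone_min_km, zone_max_km):
    """Fraction of measurement points inside the zone with a usable link."""
    in_zone = [p for p in points if zone_min_km <= p["distance_km"] < zone_max_km]
    if not in_zone:
        return None
    return sum(p["link_ok"] for p in in_zone) / len(in_zone)

# toy measurement records: distance from shore, link state, throughput (Mbps)
points = [
    {"distance_km": 12, "link_ok": True, "dl_mbps": 11.4, "ul_mbps": 3.6},
    {"distance_km": 27, "link_ok": True, "dl_mbps": 9.8,  "ul_mbps": 3.1},
    {"distance_km": 41, "link_ok": True, "dl_mbps": 9.2,  "ul_mbps": 3.0},
]

print("mean DL meets 9 Mbps:", mean(p["dl_mbps"] for p in points) >= 9.0)
print("mean UL meets 3 Mbps:", mean(p["ul_mbps"] for p in points) >= 3.0)
print("coverage 0-30 km:", zone_coverage(points, 0, 30))
print("coverage 30-50 km:", zone_coverage(points, 30, 50))
```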

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung; Kiwon Kwon; Kyoungwon Park; Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.121-130 / 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential to utilize this large amount of data, there are inevitable limits to processing diverse and complex data in a general way. Meanwhile, with enhanced computational performance and growing demand for autonomous systems, there are many attempts to apply machine learning (ML) in various fields. In particular, processing the data used as model input and designing the model to solve the objective function are critical to achieving good model performance. Data processing methods for each data type and property have been presented in many studies, and ML performance varies greatly depending on the method. Nevertheless, it is difficult to decide which data processing method to use for data analysis, since the types and characteristics of data have become more diverse. In particular, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input variable filtering based on statistical analysis and normalization according to the data property. In addition, we analyze the data structure using visualization. Lastly, we design an ML model and train it by applying the proposed multi-variate data processing, and we then analyze the passenger survival prediction performance of the trained model. We expect that the proposed multi-variate data processing and visualization can be extended to various environments for ML-based analysis.
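The sketch below shows the general shape of such a pipeline on the Titanic data: filter and encode input variables, normalize numeric columns, and fit a survival classifier; the local CSV path, the chosen columns, and the logistic-regression model are assumptions rather than the authors' exact design.

```python
# Minimal sketch of the pipeline the abstract describes: filter/encode input
# variables, normalize numeric columns, then fit a survival classifier.
# The CSV path, chosen columns, and logistic-regression model are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("titanic.csv")  # Kaggle Titanic training data (path assumed)

# simple input-variable filtering and encoding
df = df[["Survived", "Pclass", "Sex", "Age", "Fare"]].dropna()
df["Sex"] = (df["Sex"] == "female").astype(int)

X, y = df.drop(columns="Survived"), df["Survived"]
X[["Age", "Fare"]] = StandardScaler().fit_transform(X[["Age", "Fare"]])  # normalization

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("survival prediction accuracy:", model.score(X_te, y_te))
```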

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.123-139 / 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on job posting information from major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and occupations are derived, yielding association rules between occupations. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification system of the SW market. We then identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships between the data, only job postings published on multiple job sites at the same time are kept; other job postings are deleted. Next, we map the job classification systems between job sites using the association rules derived from the association analysis. We complete the mapping between these market classifications, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job listings were collected in XML format through the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering to about 900 job postings that were simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first- to fourth-level classifications. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consisted of three secondary, five tertiary, and five fourth-level classifications. The second primary class, covering databases and jobs related to system operation, consisted of three secondary, three tertiary, and four fourth-level classifications. The third primary class, covering web planning, web programming, web design, and games, was composed of four secondary, nine tertiary, and two fourth-level classifications. The last primary class, covering jobs related to ICT management and computer and communication engineering technology, consisted of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible classification depth, unlike other existing systems: WORKNET divides jobs into three levels, JOBKOREA divides jobs into two levels and then subdivides them into keywords, and saramin likewise divides jobs into two levels and subdivides them into keyword form. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs.
In the proposed system, some jobs end at the second-level classification while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down to the same depth. We also combined the rules obtained from association analysis of the collected market data with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by mapping between occupations based on data through association analysis rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand changes over time, with seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to be transferable to other industries based on its success in the SW field.
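A minimal sketch of the association-rule step, treating each cross-posted listing as a transaction of site-specific category labels and mining rules with the Apriori algorithm; the mlxtend library, the support and confidence thresholds, and the toy transactions are assumptions for illustration, not the study's actual data or parameters.

```python
# Minimal sketch of the association-rule step described in the abstract:
# treat each cross-posted job listing as a "transaction" of site-specific job
# category labels and mine rules with Apriori. Library choice (mlxtend),
# thresholds, and the toy transactions are illustrative assumptions.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

# each row: category labels the same posting received on different job sites
transactions = [
    ["worknet:web_programming", "jobkorea:web_developer", "saramin:backend"],
    ["worknet:web_programming", "jobkorea:web_developer"],
    ["worknet:db_admin", "jobkorea:dba", "saramin:dba"],
    ["worknet:db_admin", "jobkorea:dba"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```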

Theoretical Research for Unmanned Aircraft Electromagnetic Survey: Electromagnetic Field Calculation and Analysis by Arbitrary Shaped Transmitter-Loop (무인 항공 전자탐사 이론 연구: 임의 모양의 송신루프에 의한 전자기장 반응 계산 및 분석)

  • Bang, Minkyu; Oh, Seokmin; Seol, Soon Jee; Lee, Ki Ha; Cho, Seong-Jun
    • Geophysics and Geophysical Exploration / v.21 no.3 / pp.150-161 / 2018
  • Recently, unmanned aircraft EM (electromagnetic) surveys based on ICT (Information and Communication Technology) have been widely utilized because of their efficiency in regional surveys. We performed a theoretical study on the unmanned airship EM system developed by KIGAM (Korea Institute of Geoscience and Mineral Resources) as part of the practical application of unmanned aircraft EM surveys. Since this system has different configurations of transmitting and receiving loops compared to conventional aircraft EM systems, a new technique is required for the appropriate interpretation of measured responses. Therefore, we proposed a method to calculate the EM field for an arbitrarily shaped transmitter loop and verified its validity through comparison with the analytic solution for a circular loop. In addition, to simulate the magnetic responses of three-dimensionally (3D) distributed anomalies, we incorporated our algorithm into a 3D frequency-domain EM modeling algorithm based on the edge finite element method (edge-FEM). Through the analysis of magnetic field responses from a subsurface anomaly, it was found that the response decreases as the depth of the anomaly or the flight altitude increases. It was also confirmed that the response becomes smaller as the resistivity of the anomaly increases. However, the out-of-phase component shows a nonlinear trend depending on the depth of the anomaly and the frequency used, which makes it difficult to apply a simple analysis based on mapping the magnitude of the responses and can cause a non-uniqueness problem in calculating the apparent resistivity. Thus, when conducting a survey with the unmanned aircraft EM system, it is a prerequisite to determine the appropriate frequency band and flight altitude, considering the purpose of the survey and the site conditions.
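A minimal free-space sketch of the core idea: discretize an arbitrarily shaped transmitter loop into short segments, sum Biot-Savart contributions at an observation point, and check against the analytic field at the center of a circular loop (Bz = μ0 I / 2R). This reproduces only the primary field in free space, not the layered-earth or 3D edge-FEM response computed in the paper; the loop radius, current, and segment count are assumed values.

```python
# Minimal free-space sketch: arbitrary transmitter loop discretized into short
# segments, Biot-Savart contributions summed, and the result compared with the
# analytic field at the center of a circular loop (B = mu0*I / (2R)).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def loop_b_field(vertices, current, obs):
    """Biot-Savart field (T) at `obs` from a closed polygonal loop `vertices`."""
    b = np.zeros(3)
    n = len(vertices)
    for i in range(n):
        start, end = vertices[i], vertices[(i + 1) % n]
        dl = end - start                    # segment vector
        mid = 0.5 * (start + end)           # segment midpoint
        r = obs - mid                       # midpoint -> observation point
        b += MU0 * current / (4 * np.pi) * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return b

# circular loop of radius 5 m approximated by 720 straight segments (assumed values)
radius, current = 5.0, 1.0
theta = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
circle = np.column_stack([radius * np.cos(theta), radius * np.sin(theta), np.zeros_like(theta)])

numeric = loop_b_field(circle, current, obs=np.array([0.0, 0.0, 0.0]))[2]
analytic = MU0 * current / (2 * radius)
print(f"numeric Bz = {numeric:.6e} T, analytic Bz = {analytic:.6e} T")
```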