• Title/Summary/Keyword: grid technology

Search Result 2,296, Processing Time 0.026 seconds

Obstacle Avoidance of Unmanned Surface Vehicle based on 3D Lidar for VFH Algorithm (무인수상정의 장애물 회피를 위한 3차원 라이다 기반 VFH 알고리즘 연구)

  • Weon, Ihn-Sik;Lee, Soon-Geul;Ryu, Jae-Kwan
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.3
    • /
    • pp.945-953
    • /
    • 2018
  • In this paper, we use a 3-D LIDAR for obstacle detection and avoidance maneuvers in autonomous unmanned operation. The aim is for an unmanned surface vehicle to avoid obstacles at sea using only a single sensor. The 3-D lidar is Quanergy's M8 sensor, which collects data on surrounding obstacles, including layer and intensity information. The collected data are converted into a three-dimensional Cartesian coordinate system and then mapped onto a two-dimensional coordinate system. The two-dimensional obstacle data include noise from the water surface, so the regularly occurring noise is first handled by defining a virtual region of interest based on assumptions about the unmanned surface vehicle. Noise arising afterwards is removed in proportion to its amount by applying a threshold to the histogram computed by the Vector Field Histogram (VFH) method. Using the cleaned data, relative objects are searched according to the vehicle's averaged motion, and a density map of the data is built on a virtual grid map, keeping one entry per cell. A polar histogram is then generated from the resulting obstacle map, and the avoidance direction is selected using a boundary value.
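As a concrete illustration of the histogram-and-threshold step the abstract describes, here is a minimal Python sketch of VFH-style direction selection over a 2-D obstacle map. The function name, the sector width, and the inverse-distance weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def vfh_direction(points_xy, target_deg, sector_deg=5, threshold=1.0):
    """Pick a steering direction from a 2-D obstacle map, VFH-style.

    points_xy  : (N, 2) array of obstacle points in the vehicle frame
    target_deg : desired heading toward the goal, in degrees
    """
    n_sectors = 360 // sector_deg
    hist = np.zeros(n_sectors)

    # Accumulate obstacle density per angular sector, weighted by
    # inverse distance so that nearer obstacles count more.
    angles = np.degrees(np.arctan2(points_xy[:, 1], points_xy[:, 0])) % 360
    dists = np.hypot(points_xy[:, 0], points_xy[:, 1])
    for a, d in zip(angles, dists):
        hist[int(a // sector_deg) % n_sectors] += 1.0 / max(d, 0.1)

    # Sectors below the threshold are candidate ("free") directions.
    free = np.flatnonzero(hist < threshold)
    if free.size == 0:
        return None  # no safe direction found

    # Choose the free sector whose centre is closest to the target
    # heading, wrapping angle differences into [-180, 180).
    centres = free * sector_deg + sector_deg / 2
    diff = (centres - target_deg + 180) % 360 - 180
    return float(centres[np.argmin(np.abs(diff))])
```

With obstacles straight ahead, the function steers to the nearest free sector beside them; the paper's real pipeline additionally filters water-surface noise before this step.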

An historical analysis on the carbon lock-in of Korean electricity industry (한국 전력산업의 탄소고착에 대한 역사적 분석)

  • Chae, Yeoungjin;Roh, Keonki;Park, Jung-Gu
    • Journal of Energy Engineering
    • /
    • v.23 no.2
    • /
    • pp.125-148
    • /
    • 2014
  • This paper performs a historical analysis of the various factors contributing to the current carbon lock-in of the Korean electricity industry, using the techno-institutional complex framework, and investigates the industry's possibilities for carbon lock-out toward more sustainable development. It turns out that market, firm, consumer, and government factors are all responsible for the development of carbon lock-in in the Korean power industry: the Korean government's consistent favoring of large power plants based on economies of scale; below-cost electricity tariffs; inflation policy suppressing increases in power prices; rapid demand growth in the summer and winter seasons; rigidities in the electricity tariff; and the expansion of gas-fired and imported-coal-fired large power plants. On the other hand, apart from nuclear power generation and the smart grid, environmental laws and new and renewable energy laws are the remaining factors contributing to carbon lock-out. Considering that Korea is an export-oriented economy, that the generation mix is the most critical factor determining the power industry's carbon emissions, and that industrial and commercial consumption accounts for over 85% of power use, it is unlikely that Korea will achieve carbon lock-out of the power industry in the near future. Therefore, more integrated approaches involving market, firms, consumers, and government together are needed to achieve carbon lock-out in the electricity industry. Firstly, from the market perspective, it is necessary to pursue more active new and renewable energy penetration and to guarantee consumer choice by mitigating the incumbent's monopoly power, as in other OECD countries. Secondly, from the firm perspective, the promotion of distributed energy systems, including new and renewable resources and demand resources, is urgent.
Thirdly, from the consumer perspective, greener choices in the power tariff and customer awareness of carbon lock-out are needed. Lastly, the government should urgently improve its power planning frameworks to include the various externalities that were not properly reflected in the past, such as environmental and social conflict costs.

Blended IT/STEM Education for Students in Developing Countries: Experiences in Tanzania (개발도상국 학생들을 위한 블랜디드 IT/STEM교육: 탄자니아에서의 경험 및 시사점)

  • Yoon Rhee, Ji-Young;Ayo, Heriel;Rhee, Herb S.
    • Journal of Appropriate Technology
    • /
    • v.6 no.2
    • /
    • pp.151-162
    • /
    • 2020
  • Education is one of the priority sectors in Tanzania, which has committed to providing 11 years of compulsory free basic education for all, from pre-primary to lower secondary level. Despite the Government's efforts to provide free basic education to all children, 2.0 million (23.2 percent) of the 8.5 million children of primary school age (7-13) in Tanzania are out of school. The ministry of education recommends that ICT be offered as a regular class in all secondary schools, but many schools struggle to implement this mandate. Most schools teach the ICT class as theory only, without any real hardware; some schools were given computers, but these were not maintained in working order. Making ICT education universal is a huge task. The main issues include remoteness (off-grid areas), a lack of ICT teachers, a lack of resources such as hardware and infrastructure, and a lack of practical lessons or projects for use in schools. An innovative blended ICT/STEM education program is being conducted not only in Tanzanian public and private/international schools, but also for out-of-school adolescents through institutions, NGO centers, home visits, and the E3 Empower academy center. For effective and sustainable STEM education, a more practical curriculum and close-up teacher support need to accompany it. Practical, project-based simple coding lessons have been developed and employed so that students experience true learning. The effectiveness of the curriculum has been demonstrated in various project centers, where students showed new interest in exploration and discovery even though this was a totally new area for them. The lessons are designed for easy replication, so students who have learned them can teach them to other students.
The ultimate purpose of this project is to have IT education offered as universally as possible throughout Tanzania. Quality education for all children is a key to a better future for all. It was previously hoped that education with discipline would improve active learning, but now more than ever we believe that children can learn on their own when given proper STEM education tools, guidelines, and environment. This gives promising hope to all of us, including those in developing countries.

Change Analysis of Aboveground Forest Carbon Stocks According to the Land Cover Change Using Multi-Temporal Landsat TM Images and Machine Learning Algorithms (다시기 Landsat TM 영상과 기계학습을 이용한 토지피복변화에 따른 산림탄소저장량 변화 분석)

  • LEE, Jung-Hee;IM, Jung-Ho;KIM, Kyoung-Min;HEO, Joon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.4
    • /
    • pp.81-99
    • /
    • 2015
  • The acceleration of global warming requires a better understanding of carbon cycles over local and regional areas such as the Korean peninsula. Since forests serve as a carbon sink that stores a large amount of terrestrial carbon, there is a demand to accurately estimate forest carbon sequestration. In Korea, the National Forest Inventory (NFI) has been used to estimate forest carbon stocks based on the amount of growing stock per hectare measured at sampled locations. However, as such data are based on point (i.e., plot) measurements, it is difficult to identify the spatial distribution of forest carbon stocks. This study focuses on urban areas, which have a limited number of NFI samples and have undergone rapid land cover change, to estimate grid-based forest carbon stocks following UNFCCC Approach 3 and Tier 3. Land cover change and forest carbon stocks were estimated using Landsat 5 TM data acquired in 1991, 1992, 2010, and 2011, high-resolution airborne images, and the 3rd and 5th-6th NFI data. Machine learning techniques (i.e., random forest and support vector machines/regression) were used for land cover change classification and forest carbon stock estimation. Forest carbon stocks were estimated from reflectance, band ratios, vegetation indices, and topographic indices. Results showed that 33.23 tonC/ha of carbon was sequestered in forest areas that remained unchanged between 1991 and 2010, while 36.83 tonC/ha was sequestered in areas converted from other land-use types to forest. A total of 7.35 tonC/ha of carbon was released in areas converted from forest to other land-use types. This study offers a quantitative understanding of forest carbon stock change according to land cover change, and its results can contribute to more effective forest management.
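The grid-based estimation step can be sketched as follows, assuming scikit-learn's random forest regressor. The predictor set (reflectance, a vegetation index, elevation) mirrors the abstract's feature list, but the data and the underlying relationship are fabricated here purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-in for NFI training plots: spectral predictors per
# plot, with plot-level carbon stock (tonC/ha) as the target.
n = 300
red = rng.uniform(0.02, 0.15, n)    # red reflectance
nir = rng.uniform(0.20, 0.50, n)    # near-infrared reflectance
elev = rng.uniform(50, 500, n)      # elevation (topographic index)
ndvi = (nir - red) / (nir + red)    # a common vegetation index
X = np.column_stack([red, nir, ndvi, elev])

# Fabricated relationship: denser vegetation stores more carbon.
y = 120 * ndvi + 0.01 * elev + rng.normal(0, 2, n)

# Random forest regression, one of the techniques the study uses.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Applying the fitted model to every pixel's features would yield a
# wall-to-wall carbon stock map rather than point estimates.
stock_map = model.predict(X)
```

In the study itself the features come from Landsat bands and the targets from NFI plots; differencing two such maps (e.g. 1991 vs. 2010) gives the per-pixel stock change the abstract reports.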

Survey on the Business of the Departments of Radiology in Health Centers (보건소(保健所)의 방사선과(放射線科) 업무(業務)에 관한 조사연구(調査硏究))

  • Choi, Jong-Hak;Jeon, Man-Jin;Huh, Joon;Park, Sung-Ock
    • Journal of radiological science and technology
    • /
    • v.8 no.2
    • /
    • pp.21-28
    • /
    • 1985
  • We surveyed the actual business conditions of the departments of radiology of 45 health centers (3 excluded) in the Seoul, Kyungki, and Inchon areas from March 1984 to November 1984. The results are summarized as follows: 1. The authorized staffing (T.O.) of radiologic technologists is three persons in each health center in the Seoul area and one person in each center in the Kyungki and Inchon areas; the actual staffing (P.O.) is 2-5 persons in the Seoul area and 1-2 persons in the Kyungki and Inchon areas. 2. The number of radiologic technologists currently employed is 75. By position, class 7 accounts for 54.7%, class 8 for 28.0%, class 9 for 13.3%, and class 6 for 2.7%; by sex, 68.0% are female and 32.0% male; by educational background, junior college graduates form the largest group at 73.3%; by age group, 60% are in their twenties, 16.0% in their thirties and forties, and 8.0% in their fifties; by career after certification, 60% have 1-5 years of experience, 13.3% have 6-7 years or more than 21 years, and 6.7% have 11-15 years or 16-20 years. 3. A total of 62 diagnostic X-ray units are in use; 71.0% are fixed units and 29.0% portable. By rating, a maximum tube current of 100 mA (46.8%) and a maximum of 100 kVp (72.6%) are the most common. 4. A photofluorographic camera and hood are equipped in every health center. As for radiographic cassettes, 14"×14" cassettes are equipped in every health center, but cassettes of other sizes are present in only half of them. 5. A Bucky table is equipped in 11.9% of health centers, an automatic processor in 21.4%, a photofluorographic film changer in 9.5%, a grid in 73.8%, a protective apron in 88.1%, and protective gloves in 57.1%. 6.
The number of people receiving X-ray examinations in one year (by the year 1989) is highest at 1,000-6,000 for direct chest radiography, and for chest photofluorography at 15,000-45,000 in Seoul-area health centers and 5,000-20,000 in the Kyungki and Inchon areas. Other radiographic examinations are performed only to a very limited extent in all health centers. 7. For film processing, automatic processing is used in 9 health centers (21.4%), manual tank processing in 30 (71.4%), and manual tray processing in 3 (7.2%). 8. As for collimation of the X-ray exposure field, "continual use restricted by subject size" is the most common response at 78.6%, "restricted use at every radiography" accounts for 19%, and "never considered" for 2.4%. 9. As for dosimeters used for radiation control, film badges (35.7%) and pocket dosimeters (26.2%) are used, while 38.1% of health centers have no dosimeter at all. Previous radiation exposure is considered in only one health center. 10. Reading of radiographs mainly depends on radiologists on an elective basis (45.2%) or on general practitioners (45.2%).


Improvement of the Fishing Gear and Fishing Method of the East-Sea Trawl Fishery (동해구 트롤 어구어법의 개량)

  • 권병국;이주희;이춘우;김형석;김용식;안영일;김정문
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.37 no.2
    • /
    • pp.106-116
    • /
    • 2001
  • A series of studies on the fishing gear and system of the East Sea trawl fishery was carried out to improve fishing efficiency and working conditions. As the first step, the gear and system of the traditional East Sea trawl were examined to identify problems such as the poor sheering efficiency of the net mouth and the inconvenient fishing system of the side trawl. The fishing system was then reorganized from a side trawl into a stern trawl by setting up a net drum system on the stern deck and introducing two newly designed nets, one mainly for midwater trawling and the other for bottom trawling. The results of the field experiments on the modified system and nets can be summarized as follows: 1. The modified system worked well and reduced manual labour by about 80%. 2. The sheering efficiency of the improved A-type net reached a net-mouth opening of 20 m in height and 30 m in width, and that of the B-type net 10 m in height and 33 m in width, compared with 1.5 m in height and 15 m in width for the traditional net. 3. The catch efficiency for pink shrimp of the A- and B-type nets was about 3 and 5 times that of the traditional net, respectively, and the B-type net caught about 2 times more herring and other bottom fish than the traditional net.


Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The impact of a TV drama's success on ratings and on channel promotion is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore strategically important for the media industry. Previous studies have tried to predict audience ratings and drama success with various methods, but most made simple predictions using intuitive factors such as the main actor and time slot, which limits their predictive power. In this study, we propose a model that predicts the popularity of a drama by analyzing viewers' viewing patterns, grounded in various theories. This is not only a theoretical contribution but also a practical one that actual broadcasting companies can use. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels over the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and the 45 least highly ranked dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Using this similarity measure, we predicted the success of dramas from the distribution of viewers' initial viewing-time patterns over episodes 1-5. To confirm that the model is not sensitive to the choice of measure, various distance measures were applied and the model's robustness was checked. Once the model was established, we refined it further using a grid search.
Furthermore, when a new drama was broadcast, we classified viewers who had watched more than 70% of the total airtime as "passionate viewers" and compared the percentage of passionate viewers between the most and the least highly ranked dramas, so that the possibility of a blockbuster TV mini-series can be assessed. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy from the initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience-rating method different from existing ones. Broadcasters currently rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations; all are in a financially difficult situation. The basic revenue model of broadcasters is advertising, which is executed based on audience ratings as a basic index. The drama market carries demand uncertainty because of the nature of the commodity, while dramas contribute substantially to the financial success of a broadcaster's content. Therefore, to minimize the risk of failure, analyzing the distribution of initial viewing time can be of practical help in establishing a response strategy (organization, marketing, story changes, etc.) for the company involved. We also found that audience behavior is crucial to a program's success; in this paper, we define a viewing measure of how enthusiastically a program is watched.
By calculating the loyalty of these passionate viewers, we can successfully predict the success of a program. This way of calculating loyalty can also be applied to other platforms, and to marketing programs such as highlights, script previews, making-of films, characters, games, and other marketing projects.
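The Euclidean and correlation distances described above can be sketched in a few lines of Python. The hit/flop reference profiles and the nearest-profile classification rule below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def pattern_distance(pattern, reference, method="euclidean"):
    """Distance between an early viewing-time pattern and a reference
    profile; smaller means more similar."""
    p = np.asarray(pattern, dtype=float)
    r = np.asarray(reference, dtype=float)
    if method == "euclidean":
        return float(np.linalg.norm(p - r))
    if method == "correlation":
        # 1 - Pearson correlation: 0 for identical shapes, 2 for opposite.
        return float(1.0 - np.corrcoef(p, r)[0, 1])
    raise ValueError(f"unknown method: {method}")

def classify(pattern, hit_profile, flop_profile, method="euclidean"):
    """Assign a new drama to whichever reference profile is nearer."""
    d_hit = pattern_distance(pattern, hit_profile, method)
    d_flop = pattern_distance(pattern, flop_profile, method)
    return "hit" if d_hit < d_flop else "flop"
```

Checking the verdict under both distance methods is a lightweight version of the robustness check the abstract mentions: a prediction that flips between measures would be treated with caution.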

Evaluation of Contralateral Breast Surface Dose in FIF (Field In Field) Tangential Irradiation Technique for Patients Undergone Breast Conservative Surgery (보존적 유방절제 환자의 방사선치료 시 종속조사면 병합방법에 따른 반대편 유방의 표면선량평가)

  • Park, Byung-Moon;Bang, Dong-Wan;Bae, Yong-Ki;Lee, Jeong-Woo;Kim, You-Hyun
    • Journal of radiological science and technology
    • /
    • v.31 no.4
    • /
    • pp.401-406
    • /
    • 2008
  • The aim of this study is to evaluate the contralateral breast (CLB) surface dose of the field-in-field (FIF) technique for breast-conserving surgery patients. For the evaluation, we compared the FIF technique with the open fields (Open), metal wedge (MW), and enhanced dynamic wedge (EDW) techniques under the same geometric conditions and prescribed dose. A three-dimensional treatment planning system was used for dose optimization. To verify the dose calculations, measurements using MOSFET detectors with an Anderson Rando phantom were performed. The measurement points for the four techniques were at depths of 0 cm (epidermis) and 0.5 cm bolus (dermis), spaced 2 cm, 4 cm, 6 cm, 8 cm, and 10 cm from the edge of the tangential medial beam. Dose calculations were done at 0.25 cm grid resolution with the modified Batho method for inhomogeneity correction. In the planning results, the surface doses relative to Open differed by 19.6-36.9% and 33.2-138.2% for MW, by 1.0-7.9% and 1.6-37.4% for EDW, and for FIF, at the epidermis and dermis depths respectively. In the measurements, the surface doses relative to Open differed by 11.1-71% and 22.9-161% for MW, 4.1-15.5% and 8.2-37.9% for EDW, and 4.9% for FIF, at the epidermis and dermis depths respectively. The planning calculations were considered to underestimate the surface doses compared with the MOSFET measurements. The surface dose of the FIF technique was concluded to be the lowest among the techniques, even when compared with the Open method. We conclude that the FIF technique can achieve an optimal dose distribution in the breast target while effectively reducing the probability of secondary carcinogenesis due to undesirable scattered radiation to the contralateral breast.


Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.187-204
    • /
    • 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are many information sources to consult when making decisions. For example, when considering travel to a city, a person may search for reviews through a search engine such as Google or on social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely applied to text mining of the Web. Sentiment analysis has been used to classify documents through machine learning techniques such as decision trees, neural networks, and support vector machines (SVMs). It is used to determine the attitude, position, and sensibility of people who write articles on various topics published on the Web. Regardless of their polarity, emotional reviews are very helpful material for analyzing customers' opinions, and sentiment analysis helps with understanding what customers really want, instantly, by applying automated text mining techniques that extract subjective information from text on the Web. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and a self-organizing map (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as logit, decision trees, and SVM.
We employed sentiment analysis to develop our model for the selection and detection of hot topics from China's online stock forum. The sentiment analysis calculates a sentiment value for each document by classification against a polarity sentiment dictionary (positive or negative). The online stock forum is an attractive site because of its information about stock investment: users post numerous texts on stock movement, analyzing the market in light of government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for sentiment analysis, and 144 topics were selected among these categories. The posts were crawled to build a positive and negative text database; after preprocessing text from March 2013 to February 2015, we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared to the others. We also employed SVM to detect the hot topics from the negative data; the SVM models were trained with the radial basis function (RBF) kernel, tuned by a grid search. Detecting hot topics with sentiment analysis provides investors with the latest trends and hot topics in the stock forum, so that they no longer need to search the vast amount of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes toward government policy and firms' products and services.
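The RBF-kernel SVM tuned by grid search, as mentioned above, can be sketched with scikit-learn. The two-dimensional feature set (standing in for something like an interest index and an aggregate sentiment score), the labels, and the parameter grid are illustrative assumptions, not the study's actual data or settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-topic features; label 1 marks a "hot"
# topic. Real inputs would come from the forum's sentiment values.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# RBF-kernel SVM with a cross-validated grid search over C and gamma,
# the tuning procedure the abstract describes for hot-topic detection.
search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
search.fit(X, y)
best_model = search.best_estimator_
```

`search.best_params_` then holds the (C, gamma) pair with the best cross-validated accuracy, and `best_model` can classify new topics.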

Prediction of Urban Flood Extent by LSTM Model and Logistic Regression (LSTM 모형과 로지스틱 회귀를 통한 도시 침수 범위의 예측)

  • Kim, Hyun Il;Han, Kun Yeun;Lee, Jae Yeong
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.3
    • /
    • pp.273-283
    • /
    • 2020
  • Because of climate change, localized heavy rainfall is occurring more often, and it is important to predict floods in urban areas that have suffered inundation in the past. For flood prediction, machine learning-based models can be applied as well as numerical analysis models. The LSTM (Long Short-Term Memory) neural network used in this study is appropriate for sequence data, but it demands a lot of data; however, rainfall that causes flooding does not occur every year in a single urban basin, so it is difficult to collect enough data for deep learning. Therefore, in addition to the rainfall observed in the study area, rainfall observed in another urban basin was used in the predictive model. The LSTM neural network was used to predict the total overflow, with the results of the SWMM (Storm Water Management Model) as the target data. The inundation map was predicted using logistic regression, with the total overflow as the independent variable and the presence or absence of flooding in each grid cell as the dependent variable. The dependent variable was obtained from the simulation results of a two-dimensional flood model, whose input was the overflow at each manhole calculated by the SWMM. The total-overflow predictions were compared across LSTM parameter settings: four predictive models were built, each with different LSTM parameters. The average RMSEs (Root Mean Square Error) for verification and testing across the four LSTM models were 1.4279 m³/s and 1.0079 m³/s, and the minimum RMSEs were 1.1655 m³/s and 0.8797 m³/s. This confirmed that the total overflow can be predicted in close agreement with the SWMM simulation results.
The inundation extent was predicted by linking logistic regression with the results of the LSTM neural network, and the maximum area fitness was 97.33% when depths greater than 0.5 m were considered. The methodology presented in this study should help improve urban flood response based on deep learning.
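The per-cell logistic regression step described above can be sketched as follows, with synthetic data standing in for the SWMM and 2-D flood-model outputs; the overflow range, the flooding threshold, and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for one grid cell: each sample pairs a total
# overflow value (as SWMM-style simulations would give) with whether
# the 2-D flood model marked this cell as inundated (1) or dry (0).
overflow = rng.uniform(0, 20, 100).reshape(-1, 1)  # m^3/s
flooded = (overflow.ravel() > 8.0).astype(int)     # fabricated threshold

# One logistic regression relating total overflow to flooding at this
# cell; the paper fits this relation for every cell on the grid.
cell_model = LogisticRegression()
cell_model.fit(overflow, flooded)

# Given a total overflow predicted by the LSTM, the model returns the
# probability that this cell is inundated; thresholding at 0.5 over
# all cells yields the predicted inundation map.
prob_wet = cell_model.predict_proba([[15.0]])[0, 1]
```

Repeating this fit per grid cell and evaluating all cells at the LSTM's predicted overflow reproduces, in miniature, the map-linking step of the methodology.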