• Title/Summary/Keyword: Input Method


A Study on Implementation and Performance of the Power Control High Power Amplifier for Satellite Mobile Communication System (위성통신용 전력제어 고출력증폭기의 구현 및 성능평가에 관한 연구)

  • 전중성;김동일;배정철
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.1
    • /
    • pp.77-88
    • /
    • 2000
  • In this paper, a 3-mode variable-gain high power amplifier for an INMARSAT-B transmitter operating at L-band (1626.5-1646.5 MHz) was developed. This SSPA can amplify to 42 dBm in high power mode, 38 dBm in medium power mode, and 36 dBm in low power mode for INMARSAT-B. The allowable error sets +1 dBm as the upper limit and -2 dBm as the lower limit, respectively. To simplify the fabrication process, the whole system was designed as two parts: a driving amplifier and a high power amplifier. HP's MGA-64135 and Motorola's MRF-6401 were used for the driving amplifier, and ERICSSON's PTF-10114 and PTF-10021 for the high power amplifier. The SSPA was fabricated with the RF circuits, the temperature compensation circuits, the 3-mode variable gain control circuits, and a 20 dB parallel coupled-line directional coupler in an aluminum housing. In addition, a gain control method using a digital attenuator was proposed for the 3-mode amplifier, and it was experimentally verified that the gain is controlled for a single-tone signal as well as two-tone signals. In this case, the SSPA detects the output power through the 20 dB parallel coupled-line directional coupler and a phase non-splitter amplifier. The realized SSPA has small-signal gains of 41.6 dB, 37.6 dB, and 33.2 dB within a 20 MHz bandwidth, and the VSWR of the input and output ports is less than 1.3:1. The minimum value of the 1 dB compression point is more than 12 dBm for the 3-mode variable-gain high power amplifier. A typical two-tone intermodulation level is 36.5 dBc maximum with the single carrier backed off 3 dB from the 1 dB compression point. The maximum output power of 43 dBm was achieved at 1636.5 MHz. These results confirm a high power of 20 W, which was the design target.


Research about feature selection that use heuristic function (휴리스틱 함수를 이용한 feature selection에 관한 연구)

  • Hong, Seok-Mi;Jung, Kyung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.281-286
    • /
    • 2003
  • A large number of features are collected for problem solving in real life, but it is difficult to utilize all of them. It is not easy to collect correct data for every feature, and if all collected data are used for learning, the resulting model becomes complicated and does not perform well. Interrelationships or hierarchical relations may also exist among the features. We can reduce the number of features by analyzing the relations among them using heuristic knowledge or statistical methods. A heuristic technique refers to learning through repeated trial and error and experience. Experts can approach the relevant problem domain through an opinion collection process based on experience. These properties can be utilized to reduce the number of features used in learning: experts generate a new, highly abstract feature from raw data. This paper describes a machine learning model that reduces the number of features used in learning by means of a heuristic function and uses the abstracted features as the neural network's input. We applied this model to win/lose prediction in pro-baseball games. The results show that the model combining the two techniques not only reduces the complexity of the neural network model but also significantly improves classification accuracy compared to using the neural network or the heuristic model separately.
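
As a rough illustration of the pipeline the abstract describes, the sketch below abstracts raw statistics into a small number of heuristic features and feeds them to a tiny neural network. The feature definitions, network sizes, and weights are all hypothetical; the paper's actual heuristic function and model are not reproduced here.

```python
import numpy as np

def heuristic_abstract(raw):
    """Hypothetical heuristic: collapse raw per-season stats into a few
    abstract features (recent win rate, normalized run differential)."""
    wins, losses, runs_for, runs_against = raw
    win_rate = wins / (wins + losses)
    run_diff = (runs_for - runs_against) / (runs_for + runs_against)
    return np.array([win_rate, run_diff])

def mlp_forward(x, W1, b1, W2, b2):
    """Tiny feed-forward network taking the abstracted features as input;
    the sigmoid output is interpreted as a win probability."""
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

rng = np.random.default_rng(0)
x = heuristic_abstract([60, 40, 520, 480])   # 4 raw stats -> 2 features
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)  # untrained toy weights
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
p_win = mlp_forward(x, W1, b1, W2, b2)
print(p_win.shape)  # (1,)
```

The point of the design is that the network sees only the two abstracted inputs rather than all raw statistics, which is the complexity reduction the abstract claims.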

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.133-148
    • /
    • 2014
  • Recently, with the advent of various information channels, the amount of available information has continued to grow. The main cause of this phenomenon can be found in the significant increase of unstructured data, as the use of smart devices enables users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, the user's opinion and a variety of information are clearly expressed in text data such as news, reports, papers, and various articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics; for example, they not only use text documents as input data, but also use many natural language processing techniques such as filtering and parsing. Therefore, opinion mining is usually recognized as a sub-concept of text mining, or, in many cases, the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict a positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case. However, if we observe that the target of the analysis is a positive or negative opinion, the analysis can be regarded as a typical example of opinion mining. In other words, two methods (i.e., text mining and opinion mining) are available for opinion classification. Thus, in order to distinguish between the two, a precise definition of each method is needed. In this paper, we found that it is very difficult to distinguish between the two methods clearly with respect to the purpose of analysis and the type of results. We conclude that the most definitive criterion to distinguish text mining from opinion mining is whether an analysis utilizes any kind of sentiment lexicon.
We first established two prediction models, one based on opinion mining and the other on text mining, then compared the main processes used by the two models as well as their prediction accuracy on 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining based model, the prediction accuracy for documents with strong certainty was higher than that for documents with weak certainty. Most of all, opinion mining has a meaningful advantage in that it can reduce learning time dramatically, because a sentiment lexicon, once generated, can be reused in a similar application domain. Additionally, the classification results can be clearly explained by referring to the sentiment lexicon. This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
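
The reuse advantage noted above can be illustrated with a minimal lexicon-based classifier: once a sentiment lexicon exists, classification requires no training step. The lexicon entries and the sign-of-sum scoring rule below are illustrative only, not the study's actual lexicon or method.

```python
# Toy sentiment lexicon: word -> polarity score (illustrative entries).
SENTIMENT_LEXICON = {"great": 1, "moving": 1, "boring": -1, "terrible": -1}

def classify_review(tokens, lexicon=SENTIMENT_LEXICON):
    """Sum lexicon scores over tokens; the sign decides the opinion class.
    Words missing from the lexicon contribute nothing."""
    score = sum(lexicon.get(t, 0) for t in tokens)
    return "positive" if score >= 0 else "negative"

print(classify_review("a great and moving film".split()))      # positive
print(classify_review("boring plot terrible acting".split()))  # negative
```

Because the decision is a transparent sum over lexicon entries, each classification can be explained word by word, which is the interpretability benefit the abstract describes.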

Morphological Characteristics Optimizing Pocketability and Text Readability for Mobile Information Devices (모바일 정보기기의 소지용이성과 텍스트 가독성을 최적화하기 위한 형태적 특성)

  • Kim, Yeon-Ji;Lee, Woo-Hun
    • Archives of design research
    • /
    • v.19 no.2 s.64
    • /
    • pp.323-332
    • /
    • 2006
  • Information devices such as cellular phones, smart phones, and PDAs have become small enough for people to put them into their pockets without any difficulty. This drastic miniaturization deteriorates the readability of text-based contents. The morphological characteristics of size and proportion are supposed to have close relationships with the pocketability and text readability of mobile information devices. This research aimed to investigate the optimal morphological characteristics that satisfy the two usability factors together. For this purpose, we conducted a controlled experiment designed to evaluate pocketability according to the size (4000 mm²/8000 mm²), proportion (1:1/2:1/3:1), and weight (100 g/200 g) of information devices as well as participants' pose and carrying method. When male participants put the models of the information device into their pockets, the 2:1 proportion was preferred. On the other hand, female participants carrying the models in their hands preferred the 2:1 proportion (size: 4000 mm² × 2 mm) and the 3:1 proportion (size: 8000 mm² × 20 mm). For the device with a size of 4000 mm², it was found that the weight of the device has a significant effect on pocketability. In consequence, the 2:1 proportion is optimal to achieve better pocketability. The second experiment examined how text readability is affected by the size (2000 mm²/4000 mm²/8000 mm²) and proportion (1:1/2:1/3:1) of information devices as well as the interline spacing of displayed text (135%/200%). From this experiment, it was found that reading speed increased as line length increased. Regarding the subjective assessment of the reading task, the 2:1 proportion was strongly preferred. Based on these results, we suggest the 2:1 proportion as an optimal proportion that satisfies both the pocketability of mobile information devices and the readability of text displayed on the screen.
To apply these research outputs to practical design work efficiently, it is important to take into account that space for input devices is also required in addition to the display screen.


Development of Cloud and Shadow Detection Algorithm for Periodic Composite of Sentinel-2A/B Satellite Images (Sentinel-2A/B 위성영상의 주기합성을 위한 구름 및 구름 그림자 탐지 기법 개발)

  • Kim, Sun-Hwa;Eun, Jeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.989-998
    • /
    • 2021
  • In the utilization of optical satellite imagery, which is greatly affected by clouds, periodic compositing is a useful technique to minimize the influence of clouds. Recently, a technique has been proposed that selects, during periodic compositing, the optimal pixel least affected by cloud and cloud shadow over a certain period by directly inputting cloud and cloud shadow information. Accurate extraction of clouds and cloud shadows is essential in order to derive optimal composite results. Also, for surface targets where spectral information is important, such as crops, the loss of spectral information should be minimized during cloud-free compositing. In this study, two spectral indicators (Haze Optimized Transformation (HOT) and MeanVis) were used to derive a detection technique with low loss of spectral information while maintaining high detection accuracy of clouds and cloud shadows for cabbage fields in the highlands of Gangwon-do. These detection results were compared with the cloud and cloud shadow information provided by Sentinel-2A/B. Analyzing data from 2019 to 2021, the cloud information from the Sentinel-2A/B satellites showed a detection accuracy with an F1 value of 0.91, but bright artifacts were falsely detected as clouds. On the other hand, the cloud detection result obtained by applying a threshold (= 0.05) to HOT showed relatively low detection accuracy (F1 = 0.72), but the loss of spectral information was minimized due to the small number of false positives. In the case of cloud shadows, only minimal shadows were detected in the Sentinel-2A/B additional layer, but when a threshold (= 0.015) was applied to MeanVis, cloud shadows could be detected and distinguished from topographically generated shadows.
By inputting the spectral-indicator-based cloud and shadow information, stable monthly cloud-free composited vegetation index results were obtained; in the future, high-accuracy cloud information from Sentinel-2A/B will be input to the periodic cloud-free compositing for comparison.
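
The thresholding step described above can be sketched with NumPy boolean masks. The direction of the MeanVis comparison (shadows being dark, hence below the threshold) is an assumption here; the abstract states only the threshold values.

```python
import numpy as np

def cloud_shadow_masks(hot, mean_vis, hot_thr=0.05, vis_thr=0.015):
    """Threshold the two spectral indicators: HOT above hot_thr flags
    cloud; MeanVis below vis_thr flags a dark (shadow) candidate.
    Cloudy pixels are excluded from the shadow mask."""
    cloud = hot > hot_thr
    shadow = (mean_vis < vis_thr) & ~cloud
    return cloud, shadow

# Toy 2x2 scene: one bright hazy pixel, one very dark pixel.
hot = np.array([[0.01, 0.08], [0.02, 0.00]])
mean_vis = np.array([[0.20, 0.30], [0.01, 0.25]])
cloud, shadow = cloud_shadow_masks(hot, mean_vis)
```

In the study, these per-date masks are then fed into the periodic compositing so that flagged pixels are avoided when the optimal pixel is selected.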

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, and it is showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted increases as the number of labels and classes increases, there is a limitation in that performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed label, and (iii) restores the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationships between labels or compress the labels by random transformation, they have difficulty capturing the non-linear relationships between labels, and therefore cannot create a latent label space that sufficiently contains the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding has a limitation in that a large amount of information is lost when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs in the backpropagation process of learning. To solve this problem, the skip connection was devised: by adding a layer's input to its output, gradients are preserved during backpropagation, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in the autoencoder or label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate the multi-label classification result obtained by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This shows that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to the domain characteristics and the number of dimensions of the latent label space.
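
The encoder/decoder structure with skip connections can be sketched as a forward pass in NumPy. The dimensions, ReLU activations, and linear skip projections are illustrative assumptions; the paper's actual architecture and training procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
LABEL_DIM, LATENT_DIM, HIDDEN = 1000, 32, 128  # illustrative dimensions

# Encoder/decoder weights (untrained; a real model learns these).
We1 = rng.normal(0, 0.05, (HIDDEN, LABEL_DIM))
We2 = rng.normal(0, 0.05, (LATENT_DIM, HIDDEN))
Wd1 = rng.normal(0, 0.05, (HIDDEN, LATENT_DIM))
Wd2 = rng.normal(0, 0.05, (LABEL_DIM, HIDDEN))
# Skip projections let each half pass its input around the hidden layer,
# so gradients have a direct path during backpropagation.
Se = rng.normal(0, 0.05, (LATENT_DIM, LABEL_DIM))
Sd = rng.normal(0, 0.05, (LABEL_DIM, LATENT_DIM))

def relu(x):
    return np.maximum(0.0, x)

def encode(y):
    return We2 @ relu(We1 @ y) + Se @ y   # skip connection on the encoder

def decode(z):
    return Wd2 @ relu(Wd1 @ z) + Sd @ z   # skip connection on the decoder

y = (rng.random(LABEL_DIM) < 0.01).astype(float)  # sparse multi-label vector
z = encode(y)        # compressed latent label
y_hat = decode(z)    # restored high-dimensional label scores
print(z.shape, y_hat.shape)  # (32,) (1000,)
```

In the methodology, a separate model predicts `z` from the paper abstract, and `decode` restores that prediction to the original keyword label space.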

Kriging of Daily PM10 Concentration from the Air Korea Stations Nationwide and the Accuracy Assessment (베리오그램 최적화 기반의 정규크리깅을 이용한 전국 에어코리아 PM10 자료의 일평균 격자지도화 및 내삽정확도 검증)

  • Jeong, Yemin;Cho, Subin;Youn, Youjeong;Kim, Seoyeon;Kim, Geunah;Kang, Jonggu;Lee, Dalgeun;Chung, Euk;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.379-394
    • /
    • 2021
  • Air pollution data in South Korea has been provided on a real-time basis by Air Korea stations since 2005. Previous studies have shown the feasibility of gridding air pollution data, but they were confined to a few cities. This paper examines the creation of nationwide gridded maps of PM10 concentration using 333 Air Korea stations with variogram optimization and ordinary kriging. The accuracy of the spatial interpolation was evaluated using various sampling schemes to avoid a too dense or too sparse distribution of the validation points. Using the 114,745 matchups, a four-round blind test was conducted by extracting random validation points for each of the 365 days in 2019. The overall accuracy was stably high, with an MAE of 5.697 ㎍/m³ and a CC of 0.947. Approximately 1,500 cases of high PM10 concentration also showed an MAE of about 12 ㎍/m³ and a CC over 0.87, which means that the proposed method was effective and applicable to various situations. The gridded maps of daily PM10 concentration at a resolution of 0.05° also showed a reasonable spatial distribution, which can be used as an input variable for a gridded prediction of the next day's PM10 concentration.
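
Ordinary kriging at a single target point can be sketched as follows: build the variogram matrix between stations, append the unbiasedness constraint (weights summing to 1) via a Lagrange multiplier, and solve the linear system for the weights. The exponential variogram model and its parameters here are illustrative; the study optimizes the variogram rather than using fixed values like these.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rang=50.0):
    """Exponential variogram model (parameters are illustrative)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rang))

def ordinary_kriging(coords, values, target):
    """Estimate the value at `target` from station `values` by solving
    the standard ordinary kriging system with a Lagrange multiplier."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)          # station-station variogram
    A[n, :n] = A[:n, n] = 1.0             # weights sum to 1
    A[n, n] = 0.0
    d0 = np.linalg.norm(coords - target, axis=1)
    b = np.append(exp_variogram(d0), 1.0)  # station-target variogram
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)

# Four hypothetical stations on a 10 km square; estimate at the center.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pm10 = np.array([40.0, 55.0, 35.0, 60.0])
est = ordinary_kriging(coords, pm10, np.array([5.0, 5.0]))
```

By symmetry the four weights are equal here, so the estimate is the plain mean of the station values; with irregular station geometry, as in the nationwide network, the weights differ per station.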

Feasibility of Tax Increase in Korean Welfare State via Estimation of Optimal Tax burden Ratio (적정조세부담률 추정을 통한 한국 복지국가 증세가능성에 관한 연구)

  • Kim, SeongWook
    • Korea Social Policy Review (한국사회정책)
    • /
    • v.20 no.3
    • /
    • pp.77-115
    • /
    • 2013
  • The purpose of this study is to present empirical evidence for the discussion of financing social welfare by estimating the optimal tax burden in the main OECD member countries, using the Hausman-Taylor method to account for the endogeneity of explanatory variables. The author also produced an international tax comparison (ITC) index, reflecting theoretical hypotheses on the revenue-expenditure nexus within the model, to compare the real tax burden across countries and to examine the feasibility of a tax increase in Korea. The analysis showed that the higher the level of tax burden, the higher the level of welfare expenditure, indicating a connection between high burden and high welfare in terms of scale. The results also indicated that the subject countries have recently entered a state of low tax burden. Korea maintained a low burden until the late 1990s, but the tax burden soared after the IMF financial crisis. However, due to the impact of the foreign economy and the tax reduction policy, it re-entered the low-burden state after 2009. On the other hand, the degree to which social welfare expenditure reduces the tax burden has gradually increased since the crisis. In this context, the optimal tax burden ratio of Korea as of 2010 may be 25.8%~26.5% of GDP when welfare expenditure variables are included. Korea was found to be a 'high tax burden-low ITC' country for which a tax increase of 0.7~1.4%p may be feasible, and for which a tax system reform aimed at a tax increase has a higher probability of success than in other countries. However, increasing social security contributions and consumption tax was analyzed to be inappropriate from the aspect of fiscal management when compared to increases in other tax items, considering the relatively high ITC.
A tax increase is not necessarily required even though there may be room for one; the optimal tax burden ratio should be understood as the level that may be achieved on average when compared to other nations, not as the "proper" level. Thus, discussion of a tax increase should be accompanied by a comprehensive understanding of the differences in economic development models among nations and of the institutional and historical attributes embedded in each specific tax mix.

Analysis of Economic and Environmental Effects of Remanufactured Furniture Through Case Studies (사례분석을 통한 사용 후 가구 재제조의 경제적·환경적 효과 분석)

  • Lee, Jong-Hyo;Kang, Hong-Yoon;Hwang, Yong Woo;Hwang, Hyeon-Jeong
    • Resources Recycling
    • /
    • v.31 no.5
    • /
    • pp.67-76
    • /
    • 2022
  • The furniture industry has high potential to create added value and new jobs due to the characteristics of the industry, which mainly consists of small and medium-sized enterprises (SMEs). However, used furniture that has sufficient reuse value has recently been crushed and used as solid refuse fuel (SRF). Besides, the number of waste treatment companies continues to decrease, which causes congestion of wood waste. As a way to solve this issue, developing a business model for remanufacturing used furniture can be suggested as an alternative due to its high circular-economy efficiency. Remanufacturing businesses, including those in the furniture industry, create positive effects in various aspects such as the economy, the environment, and job creation. In other words, remanufacturing is an effective recycling method that reduces the resources and energy input in the production process. The results of the economic analysis show that the expected annual revenue of a single-worker furniture remanufacturing site was 104 million won, which is 3.11 times the average income of a single-worker household in Korea, and its B/C ratio was estimated at about 30, indicating high business feasibility. On a per-weight basis, revenue from furniture remanufacturing was also 320 times higher than that from SRF production. In addition, the GHG reduction from furniture remanufacturing is 2.2 tons CO2-eq. per year, which is similar to the annual GHG absorption of 937 pine trees or 622 Korean oak trees. Thus, the results of this study demonstrate that it is important to adopt an appropriate recycling method considering the economic and environmental effects at the end-of-life stage.

Deep Learning Approaches for Accurate Weed Area Assessment in Maize Fields (딥러닝 기반 옥수수 포장의 잡초 면적 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Wan-Gyu Sang;Ho-young Ban;Sungyul Chang;Jae-Kyeong Baek;Yun-Ho Lee;Woo-jin Im;Myung-chul Seo;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.1
    • /
    • pp.17-27
    • /
    • 2023
  • Weeds are one of the factors that reduce crop yield through competition for nutrients and photosynthesis. Quantification of weed density is an important part of making accurate decisions for precision weeding. In this study, we quantified the density of weeds in images of maize fields taken by an unmanned aerial vehicle (UAV). UAV image data were collected in maize fields from May 17 to June 4, 2021, when the maize was in its early growth stage. The UAV images were labeled pixel-wise as maize or non-maize and then cropped to be used as input data for the semantic segmentation network of the maize detection model. We trained models to separate maize from the background using the deep learning segmentation networks DeepLabV3+, U-Net, LinkNet, and FPN. All four models showed a pixel accuracy of 0.97, and the mIOU score was 0.76 for DeepLabV3+ and 0.74 for U-Net, higher than the 0.69 of LinkNet and FPN. Weed density was calculated as the difference between the green area classified by ExGR (Excess Green minus Excess Red) and the maize area predicted by the model. Each image evaluated for weed density was recombined to quantify and visualize the distribution and density of weeds over a wide range of maize fields. We propose a method to quantify weed density for accurate weeding by effectively separating weeds, maize, and background in UAV images of maize fields.
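
The weed-density computation described above (vegetation by ExGR minus model-predicted maize) can be sketched as follows. The ExGR formulation is the standard ExG − ExR index on chromaticity-normalized RGB; the zero threshold and the toy image are illustrative, not the study's tuned settings.

```python
import numpy as np

def exgr_mask(rgb, thr=0.0):
    """ExGR = ExG - ExR, with ExG = 2g - r - b and ExR = 1.4r - g on
    chromaticity-normalized channels; positive values count as vegetation.
    The zero threshold is a common default, not the paper's value."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-9
    rn, gn, bn = r / s, g / s, b / s
    exg = 2 * gn - rn - bn
    exr = 1.4 * rn - gn
    return (exg - exr) > thr

def weed_ratio(rgb, maize_mask):
    """Weed area = vegetation pixels (ExGR) minus model-predicted maize."""
    veg = exgr_mask(rgb)
    weed = veg & ~maize_mask
    return weed.sum() / weed.size

# Toy 2x2 image: two green pixels, one of them labeled maize by the model.
img = np.array([[[0.2, 0.7, 0.1], [0.2, 0.7, 0.1]],
                [[0.5, 0.3, 0.2], [0.4, 0.3, 0.3]]])
maize = np.array([[True, False], [False, False]])
print(weed_ratio(img, maize))  # 0.25
```

In the study, the maize mask comes from the trained segmentation network (e.g., DeepLabV3+), and the per-tile ratios are recombined into a field-scale weed density map.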