• Title/Summary/Keyword: Response Data Analysis System


Extraction and Separation of Urban Terrain Features Using Threshold Variation in Mathematical Morphology (수학적 모폴로지의 경계치 변화에 의한 도시환경 지형지물 추출 및 분리응용)

  • O, Se-Gyeong;Lee, Gi-Won
    • Korea Spatial Information System Society: Conference Proceedings
    • /
    • 2004.12a
    • /
    • pp.139-143
    • /
    • 2004
  • Demand for civilian use of high-resolution satellite imagery has recently grown, prompting extensive related research in applications that handle conventional spatial information. When satellite imagery is processed for urban traffic environment analysis, the interpretation of terrain features such as roads, buildings, and other linear structures can be subjective, varying with the analyst. Against this background, mathematical gray-level morphology is regarded as an effective approach. In this study, a practical application program running on the Windows operating system was implemented for terrain feature extraction. In this program, major terrain features are extracted automatically from gray-level imagery through sequential opening, closing, erosion, and dilation operations, which is an advantage when compared with GDPA, the Hough transform, and other algorithms. Alongside the morphological processing, the program provides a density-slicing function that extracts features based on ranges of gray-level values, and 'sieve filtering', which removes pixel clusters smaller than a given threshold. These functions are useful for enhancing the morphologically processed results and separating feature classes, and they contribute to background removal, noise detection, and feature characterization in urban remote sensing. The program was tested on IKONOS satellite imagery. The results show that gray-level morphological processing with multiple thresholds or sieve filtering is an effective feature extraction method for given targets in high-resolution imagery containing complex features and large amounts of data, combining automatic processing with user-defined sieve filtering.
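The sequential morphology pipeline the abstract describes (erosion, dilation, opening, closing, and sieve filtering) can be sketched roughly as follows. This is a minimal binary-morphology illustration in plain Python, not the paper's gray-level Windows implementation; the 3×3 structuring element and the 4-connectivity used by the sieve are assumptions.

```python
# Minimal binary morphology sketch: erosion, dilation, opening, closing,
# and a sieve filter removing components below a size threshold.
# (The paper works on gray-level imagery; binary is used here for brevity.)

def erode(img, k=1):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
    return out

def dilate(img, k=1):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if any(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
    return out

def opening(img):   # erosion then dilation: removes small bright specks
    return dilate(erode(img))

def closing(img):   # dilation then erosion: fills small dark gaps
    return erode(dilate(img))

def sieve(img, min_size):
    """Remove 4-connected components smaller than min_size pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:   # cluster below threshold: erase it
                    for cy, cx in comp:
                        out[cy][cx] = 0
    return out
```

Applied to a small grid, the sieve drops isolated pixels while keeping larger clusters, which is the behavior the abstract attributes to sieve filtering.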


Range Design of Pulse Repetition Frequency for Removal of SAR Residual Image (영상레이더 잔상 제거를 위한 펄스 반복 주파수의 범위 설계)

  • Kim, Kyeong-Rok;Heo, Min-Wook;Kim, Tu-Hwan;Ryu, Sang-Burm;Lee, Sang-Gyu;Lee, Hyeon-Cheol;Kim, Jae-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1653-1660
    • /
    • 2016
  • The synthetic aperture radar (SAR) is an active sensor using microwaves. It transmits a microwave signal, called a chirp pulse, and receives the reflected signal from a moving platform such as a satellite or an unmanned aerial vehicle. Since microwaves penetrate the atmosphere, SAR generates images regardless of light and weather conditions. However, because SAR operates on a moving platform, the Doppler shift and the side-looking observation geometry must be considered. In addition, a residual (ghost) image can occur depending on the selection of the pulse repetition frequency (PRF). In this paper, a range design of the PRF for an L-band spaceborne SAR system is studied to prevent SAR image distortion, and the system parameters and the PRF are calibrated iteratively according to the proposed system design procedure and design constraints. A MATLAB-based SAR system simulator has been developed to verify the validity of the calculated PRF. The simulator assumes that the SAR sensor operates at the PRF calculated from the design. The simulation results show that the targets in the image have a valid peak-to-side-lobe ratio (PSLR), so the PRF can be used for the spaceborne SAR sensor.
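The PRF bounding logic the abstract refers to can be sketched as follows: the PRF must exceed the Doppler bandwidth to avoid azimuth ambiguities, while staying low enough that the echo from the whole swath fits within one pulse interval. The formulas are the standard strip-map constraints, and every numeric parameter in the usage below is illustrative, not taken from the paper's L-band design.

```python
# Hedged sketch of PRF range bounds for a strip-map SAR.
C = 3.0e8  # speed of light, m/s

def prf_lower_bound(platform_velocity, antenna_length):
    """Azimuth constraint: PRF must exceed the Doppler bandwidth ~ 2*v/L."""
    return 2.0 * platform_velocity / antenna_length

def prf_upper_bound(near_range, far_range, pulse_width):
    """Range constraint: the swath echo window must fit in one pulse interval."""
    echo_window = 2.0 * (far_range - near_range) / C + pulse_width
    return 1.0 / echo_window

def valid_prf_range(v, L, r_near, r_far, tau):
    """Return (lo, hi) if a consistent PRF interval exists, else None."""
    lo = prf_lower_bound(v, L)
    hi = prf_upper_bound(r_near, r_far, tau)
    return (lo, hi) if lo < hi else None

# Illustrative numbers only: 7.5 km/s orbit speed, 10 m antenna,
# 850-880 km slant range, 40 us pulse.
bounds = valid_prf_range(7500.0, 10.0, 850e3, 880e3, 40e-6)
```

A design procedure like the paper's would then iterate the system parameters until the chosen PRF sits inside such an interval while also dodging nadir and transmit interference.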

Channel characteristics of multi-path power line using a contactless inductive coupling unit (비접촉식 유도성 결합기를 이용한 다중경로 전력선 채널 특성)

  • Kim, Hyun-Sik;Sohn, Kyung-Rak
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.9
    • /
    • pp.799-804
    • /
    • 2016
  • Broadband power line communication (BPLC) uses distribution lines as a medium for effective bidirectional data communication alongside the flow of electric current. Because the material characteristics of power lines make them a poor communication channel, developing power line communication (PLC) systems for internet, voice, and data services requires measurement-based models of the network's transfer characteristics suitable for performance analysis by simulation. In this paper, an analytic model describing a complex transfer function is presented to obtain the attenuation and path parameters for a multipath power line model. The calculated results demonstrated frequency-selective fading in multipath channels and signal attenuation increasing with frequency, in good agreement with the experimental results. Inductive coupling units are used to couple the signal onto the power line without a physical connection to the distribution line. The inductance of the ferrite core, which depends on frequency, determines the cut-off frequency of the inductive coupler. Coupling loss can be minimized by increasing the number of windings around the coupler; coupling efficiency improved by more than 6 dB with three windings compared to one winding.
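A common analytic form for a multipath power-line transfer function is the Zimmermann-Dostert model, which may correspond to the complex transfer function mentioned here; the sketch below assumes that form, and the path gains, lengths, and attenuation constants are placeholders rather than the paper's fitted values.

```python
import cmath
import math

def plc_transfer(f, paths, a0=0.0, a1=1e-10, k=1.0, v_p=1.5e8):
    """H(f) = sum_i g_i * exp(-(a0 + a1*f^k)*d_i) * exp(-j*2*pi*f*d_i/v_p).

    paths: list of (g_i, d_i) with per-path gain g_i and length d_i in meters.
    a0, a1, k: attenuation parameters; v_p: propagation speed on the line.
    """
    h = 0j
    for g_i, d_i in paths:
        atten = math.exp(-(a0 + a1 * f ** k) * d_i)        # frequency-dependent loss
        delay = cmath.exp(-2j * math.pi * f * d_i / v_p)   # propagation phase
        h += g_i * atten * delay
    return h

def attenuation_db(f, paths, **kw):
    """Channel attenuation in dB at frequency f."""
    return -20.0 * math.log10(abs(plc_transfer(f, paths, **kw)))
```

Summing several delayed, attenuated paths is exactly what produces the frequency-selective fading the abstract reports: at frequencies where two paths arrive half a cycle apart, their contributions cancel.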

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology comprises the following steps: 1) collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News for "subject-predicate" separated queries and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive its confidence; 3) based on the predicate feature, extract information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; the system shows higher performance indices than the baseline model. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query, with which we developed a robust model that maintains high recall across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types. The proposed methodology proved to extract information effectively from various document types compared to the baseline model, whereas previous research performed poorly when extracting information from document types different from the training data.
In addition, this study can prevent unnecessary extraction attempts on documents that do not contain the answer, through a step that predicts the suitability of documents and sentences for information extraction before extraction itself. It is meaningful that we provide a method by which precision can be maintained even in a real web environment. The information extraction problem for knowledge base expansion targets unstructured documents on the real web, so there is no guarantee that a given document contains the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents with no correct answer; the policy of predicting document and sentence suitability contributes to maintaining extraction performance in this setting. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can fail when the morphological analysis is performed incorrectly, so an improved morphological analyzer is needed to enhance extraction performance. Second, entity ambiguity: the information extraction system of this study cannot distinguish identical names with different referents; if several people with the same name appear in the news, the system may not extract information about the intended entity.
Future research should take measures to disambiguate entities with the same name. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and developed an evaluation data set using 800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News), judging whether each contains the correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries; this is a costly activity that must be done manually, so future research needs to evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
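The suitability-before-extraction policy described above can be sketched as a two-stage pipeline. The scoring and extraction functions below are toy keyword-based stand-ins, not the paper's bi-directional LSTM-CRF models; only the control flow (abstain when no sentence is suitable) mirrors the described policy.

```python
# Toy sketch of the two-stage policy: score sentence suitability first,
# attempt extraction only above a threshold, otherwise abstain.

def suitability(sentence, subject, predicate):
    """Toy confidence: fraction of query terms present in the sentence."""
    terms = [subject, predicate]
    hits = sum(1 for t in terms if t in sentence)
    return hits / len(terms)

def extract(sentence, subject, predicate):
    """Toy extraction: return the token following the predicate, if any."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens[:-1]):
        if predicate in tok:
            return tokens[i + 1]
    return None

def answer_query(documents, subject, predicate, threshold=0.5):
    """Skip unsuitable sentences so no-answer documents yield no extraction."""
    for doc in documents:
        for sentence in doc.split("."):
            if suitability(sentence, subject, predicate) >= threshold:
                ans = extract(sentence, subject, predicate)
                if ans is not None:
                    return ans
    return None  # abstaining preserves precision on no-answer documents
```

The payoff of the gate is visible even in this toy: a document with no relevant sentence produces no spurious answer, which is the precision-preserving behavior the abstract claims for the real system.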

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people had thought machines could not beat a human at Go, since unlike chess the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains, especially deep learning, the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems; it shows particularly good performance in image recognition, as well as on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account.
To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, the traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output channels (filters), and the application of dropout. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values and recognizes local features, but business data fields are usually independent, so the distance between fields carries little meaning; we therefore set the CNN filter size to the number of fields, so the model learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the derived features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For dropout, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout.
Several findings emerged from the experiments. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, as well as in fields where their effectiveness is already proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. These results confirm that some deep learning algorithms can be applied to solve business binary classification problems.
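The choice of F1 over accuracy matters because telemarketing response classes are typically imbalanced. A minimal sketch of the metric, as the harmonic mean of precision and recall on the positive class:

```python
# F1 score for binary labels: harmonic mean of precision and recall
# computed on the positive (interesting) class only.

def f1_score(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0  # no true positives: both precision and recall are zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Note that a classifier that always predicts the majority (negative) class can score high accuracy on imbalanced data yet earns an F1 of exactly 0, which is why the study evaluates with F1 rather than overall accuracy.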

Estimation of Internal Motion for Quantitative Improvement of Lung Tumor in Small Animal (소동물 폐종양의 정량적 개선을 위한 내부 움직임 평가)

  • Yu, Jung-Woo;Woo, Sang-Keun;Lee, Yong-Jin;Kim, Kyeong-Min;Kim, Jin-Su;Lee, Kyo-Chul;Park, Sang-Jun;Yu, Ran-Ji;Kang, Joo-Hyun;Ji, Young-Hoon;Chung, Yong-Hyun;Kim, Byung-Il;Lim, Sang-Moo
    • Progress in Medical Physics
    • /
    • v.22 no.3
    • /
    • pp.140-147
    • /
    • 2011
  • The purpose of this study was to estimate internal motion using a molecular sieve for quantitative improvement of lung tumor imaging, and to localize lung tumors in small-animal PET images using the evaluated data. Internal motion was demonstrated in the small-animal lung region with a molecular sieve containing a radioactive substance; the molecular sieve used as the internal lung-motion target contained approximately 37 kBq of Cu-64. The small-animal PET images were obtained on a Siemens Inveon scanner using an external trigger system (BioVet). SD rat PET images were acquired for 20 min starting 60 min after injection of 37 MBq/0.2 mL FDG via the tail vein. Each line of response in the list-mode data was sorted into gated sinogram frames (2-16 bins) by the trigger signal obtained from BioVet, and the sinogram data were reconstructed using 2D OSEM with 4 iterations. PET images were evaluated by count, SNR, and FWHM from an ROI drawn over the target region for quantitative tumor analysis. The size of the molecular-sieve motion target was 1.59 × 2.50 mm. The reference motion target FWHM was 2.91 mm vertically and 1.43 mm horizontally. The vertical FWHM of the static, 4-bin, and 8-bin images was 3.90 mm, 3.74 mm, and 3.16 mm, respectively; the horizontal FWHM was 2.21 mm, 2.06 mm, and 1.60 mm, respectively. The counts for the static, 4-bin, 8-bin, 12-bin, and 16-bin images were 4.10, 4.83, 5.59, 5.38, and 5.31, and the SNR was 4.18, 4.05, 4.22, 3.89, and 3.58, respectively. The FWHM improved as the number of gates increased. The count and SNR did not improve proportionally with the number of gates, but peaked at a specific bin number. We determined the optimal gate number that minimizes SNR loss while gaining improved counts when imaging lung tumors in small animals. The internal motion estimation provides localized tumor images and will be a useful method for organ-motion prediction modeling without an external motion monitoring system.
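The FWHM figures above can be measured from a 1-D count profile by locating the half-maximum crossings with linear interpolation, roughly as follows; the triangular profile in the test is illustrative, not the study's data.

```python
# FWHM of a 1-D profile: find where the curve crosses half the peak value
# on each side, interpolating linearly between samples.

def fwhm(profile, spacing=1.0):
    """Full width at half maximum; spacing is the sample pitch (e.g. mm)."""
    peak = max(profile)
    half = peak / 2.0
    left = right = None
    for i in range(1, len(profile)):
        a, b = profile[i - 1], profile[i]
        if left is None and a < half <= b:            # rising edge crossing
            left = (i - 1) + (half - a) / (b - a)
        if left is not None and a >= half > b:        # falling edge crossing
            right = (i - 1) + (a - half) / (a - b)
    return (right - left) * spacing
```

A sharper (smaller-FWHM) profile after gating is exactly what the reported vertical FWHM drop from 3.90 mm (static) to 3.16 mm (8-bin) reflects.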

Determinants of Mobile Application Use: A Study Focused on the Correlation between Application Categories (모바일 앱 사용에 영향을 미치는 요인에 관한 연구: 앱 카테고리 간 상관관계를 중심으로)

  • Park, Sangkyu;Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.157-176
    • /
    • 2016
  • For a long time, the mobile phone served the sole function of communication. Recently, however, abrupt innovations in technology have extended the sphere of mobile phone activities: technological development enabled an almost computer-like environment on a very small device, yielding new high-tech devices such as the smartphone and tablet PC, which quickly proliferated. Along with the diffusion of mobile devices, mobile applications for those devices also prospered and soon became deeply embedded in consumers' daily lives. Numerous mobile applications have been released in app stores, yielding trillions of cumulative downloads; however, a large majority of applications are disregarded by consumers, and even purchased applications do not survive long on consumers' devices before being abandoned. Nevertheless, it is imperative for both app developers and app-store operators to understand consumer behavior and to develop marketing strategies for a sustainable business, first by increasing sales of mobile applications and also by designing survival strategies for applications. Therefore, this research analyzes consumers' mobile application usage behavior in a framework of substitution/supplementarity across application categories together with several explanatory variables. Considering that consumers of mobile devices use multiple apps simultaneously, this research adopts multivariate probit models to explain mobile application usage behavior and to derive correlations between application categories, revealing substitution or supplementary patterns of application use.
The research adopts several explanatory variables, including sociodemographic data, user experience with purchased applications (which reflects future purchasing of paid applications), and consumer attitudes toward marketing efforts: variables representing attitudes toward app ratings and toward app-store promotion efforts (i.e., the top developer badge and the editor's choice badge). The results of this study can be explained in a hedonic/utilitarian framework. Consumers who use hedonic applications, such as game and entertainment apps, are young with lower education levels, whereas older consumers with higher education prefer utilitarian categories such as life and information. Whether SNS users are hedonic or utilitarian is disputed; in our results, younger consumers and those with higher education prefer SNS applications, which falls between the utilitarian and hedonic patterns. Applications directly related to tangible assets, such as banking, stock trading, and mobile shopping, are negatively related only to the experience of purchasing paid apps, meaning that consumers who put weight on tangible assets do not prefer buying paid applications. Regarding categories, most correlations among categories are significantly positive, because someone who spends more time on mobile devices tends to use more applications. The game and entertainment categories show a significant positive correlation, but significantly negative correlations exist between the game and information categories and between the game and e-commerce categories, while the game-SNS and game-finance pairs show no significant correlations.
This result clearly shows that mobile application usage behavior is distinguishable: the purposes of using mobile devices polarize into utilitarian and hedonic. This research substantiates arguments that can only be tested with second-hand behavioral data, not survey data, and offers behavioral explanations of mobile application usage from the consumer's perspective. It also reveals substitution/supplementary patterns of application usage that explain consumers' behavior. However, this research has limitations. The classification of categories itself is disputable, as classifications diverge across studies, so the results could change under a different classification. Lastly, although the data were collected at the individual-application level, we reduced the observations to the individual level. Further research will address these limitations.
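The substitution/supplementary reading above comes from the error-correlation matrix of the multivariate probit model. As a loose, simplified stand-in for that estimator, the sign of the phi coefficient between binary category-usage indicators points the same way: positive suggests supplementary use, negative suggests substitution. This is purely an illustration, not the paper's method.

```python
import math

# Phi coefficient between two binary usage indicators (1 = uses the category).
# Sign interpretation mirrors the abstract: + supplementary, - substitution.

def phi_coefficient(x, y):
    n11 = sum(1 for a, b in zip(x, y) if a and b)
    n10 = sum(1 for a, b in zip(x, y) if a and not b)
    n01 = sum(1 for a, b in zip(x, y) if not a and b)
    n00 = sum(1 for a, b in zip(x, y) if not a and not b)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0
```

The multivariate probit goes further by correlating latent utilities after conditioning on the explanatory variables, which a raw association measure like this cannot do.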

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.87-109
    • /
    • 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land. This trend is leading to increasing demand for IT services on vessels at sea as well, and requests for IT services such as two-way digital data transmission, Web, and apps are expected to rise to the level available on land. However, while a high-speed communication network is easily accessible on land because it rests on fixed infrastructure such as APs and base stations, that is not the case at sea, so a radio-network-based voice communication service is usually used there. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) that utilizes this frequency was proposed. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in SANET; to obtain it, a ship must join the network with an assigned IP address. This paper proposes the SANET-CC protocol, which allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address blocks through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate the IP collision prevention (Duplicate Address Detection) process and the network separation or integration processes caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following outcomes.
First, using SANET-CC, about 91% of the ships in the network received IP addresses under all tested circumstances, 6% higher than in existing studies, and the results suggest that adjusting the variables to each port's environment may improve this further. Second, all vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% decrease from the 20-second average of the previous study. Besides, considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400, the efficiency gain can be even higher. Third, existing studies could not derive optimal values for their variables, because the results showed no consistent pattern as the variables changed, meaning optimal variable values could not be set for each port under diverse environments. This paper, however, shows that the result values exhibit a consistent pattern across the variables, which is significant because the protocol can be applied to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient (about 96%) when the waiting time after an IP request was 75 ms, and that the tree structure maintained a stable network configuration when the number of IPs exceeded 30,000. Fourth, this study can be used to design networks supporting intelligent maritime control systems and services offshore instead of satellite communication, and if LTE-M is deployed, it can be used for various intelligent services.
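The collision-free core of SANET-CC (non-overlapping address blocks propagated down a tree, removing the need for Duplicate Address Detection) can be sketched as follows; the integer addressing and block sizes are simplified placeholders, not the protocol's actual message formats.

```python
# Sketch of tree-based, collision-free address allocation: a base station
# (root) delegates disjoint sub-blocks to allocator nodes (M-ships), which
# answer ships' requests from their local block. Since blocks never overlap,
# no Duplicate Address Detection is needed.

class Allocator:
    def __init__(self, block):
        self.free = list(block)       # addresses this node may hand out
        self.assigned = []

    def split(self, k):
        """Delegate a disjoint sub-block of k addresses to a child allocator."""
        sub, self.free = self.free[:k], self.free[k:]
        return Allocator(sub)

    def assign(self):
        """Answer a ship's request with one address from the local block."""
        addr = self.free.pop(0)
        self.assigned.append(addr)
        return addr

base = Allocator(range(1, 101))       # root block held by the land base station
m_ship = base.split(20)               # delegated block for an M-ship
```

Because `split` removes the delegated addresses from the parent's pool, any two allocators in the tree hold disjoint blocks by construction, which is the property that lets the protocol skip collision detection entirely.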