• Title/Summary/Keyword: next generation information system

A 1280-RGB $\times$ 800-Dot Driver based on 1:12 MUX for 16M-Color LTPS TFT-LCD Displays (16M-Color LTPS TFT-LCD 디스플레이 응용을 위한 1:12 MUX 기반의 1280-RGB $\times$ 800-Dot 드라이버)

  • Kim, Cha-Dong;Han, Jae-Yeol;Kim, Yong-Woo;Song, Nam-Jin;Ha, Min-Woo;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.1 / pp.98-106 / 2009
  • This work proposes a 1280-RGB $\times$ 800-Dot 70.78mW 0.13um CMOS LCD driver IC (LDI) for high-performance 16M-color low temperature poly silicon (LTPS) thin film transistor liquid crystal display (TFT-LCD) systems, such as ultra mobile PCs (UMPC) and mobile applications, that simultaneously require high resolution, low power, small size, and high speed. The proposed LDI optimizes power consumption and chip area at high resolution based on a resistor-string architecture. A single column driver employing a 1:12 MUX architecture drives 12 channels simultaneously to minimize chip area. The implemented class-AB amplifier achieves rail-to-rail operation with high gain and low power while minimizing the effect of offset and output deviations for high definition. A supply- and temperature-insensitive current reference is implemented on chip with a small number of MOS transistors. A slew enhancement technique applicable to next-generation source drivers, though not implemented on this prototype chip, is proposed to reduce power consumption further. The prototype LDI implemented in a 0.13um CMOS technology demonstrates a measured settling time of the source driver amplifiers within 1.016us and 1.072us during high-to-low and low-to-high transitions, respectively. The output voltage of the source drivers shows a maximum deviation of 11mV. The LDI, with an active die area of $12,203um{\times}1500um$, consumes 70.78mW at 1.5V/5.5V.

The study of stereoscopic editing process with applying depth information (깊이정보를 활용한 입체 편집 프로세스 연구)

  • Baek, Kwang-Ho;Kim, Min-Seo;Han, Myung-Hee
    • Journal of Digital Contents Society / v.13 no.2 / pp.225-233 / 2012
  • 3D stereoscopic image contents have emerged as a blue chip of the next-generation contents market. However, the 3D contents produced commercially in Korea have all failed at the box office, both because their quality is much lower than that of overseas contents and because the current 3D post-production process is based on 2D; the 3D editing process is thus directly connected to the quality of the contents. In current production practice, editing is done on a 2D-based system, after which the result is checked on a 3D display system and corrected if problems are found. To improve on this, this study proposes making the 3D editing process more objective by visualizing the depth information already applied in compositing work, such as disparity maps and depth maps, within the current 3D editing process. The proposed process was applied to a music drama production and compared against a film produced with the conventional process. In the conventionally edited film, the 3D values varied considerably from cut to cut, whereas the proposed process yielded generally uniform 3D values. Because the current process is based on an artist's subjective sense of depth, results can change with the artist's condition; moreover, the positive parallax range cannot be predicted, so cuts within the same or a confined space may show different 3D values and distort the stereoscopic sense of the space. Objective 3D editing based on visualized depth data, by contrast, keeps the stereoscopic effect consistent within a space and across the whole content, enriching the 3D contents, and can help solve problems such as depth distortion and visual fatigue.

School Experiences and the Next Gate Path: An analysis of Univ. Student activity log (대학생의 학창경험이 사회 진출에 미치는 영향: 대학생활 활동 로그분석을 중심으로)

  • YI, EUNJU;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.149-171 / 2020
  • The university years are when students decide on an actual career. As society develops rapidly, jobs become diversified, subdivided, and specialized, and students' job preparation periods keep getting longer. This study analyzed the log data of college students to see how the various activities they experience inside and outside of school influence employment. For this analysis, students' activities were systematically classified, recorded as activity data, and grouped into six core competencies (job reinforcement, leadership & teamwork, globalization, organizational commitment, job exploration, and autonomous implementation). The effect of the six competency levels on employment status (employed group vs. unemployed group) was analyzed, and the difference in level between the two groups was significant for all six competencies, supporting the inference that activities at school matter for employment. Next, to analyze the impact of the six competencies on the qualitative performance of employment, each competency level was divided into two groups (low and high), six groups were formed by the range of first annual salary, and ANOVA was conducted. Students with high levels of the globalization, job exploration, and autonomous implementation competencies were found to belong to higher annual salary groups. The theoretical contributions of this study are as follows. First, it connects the competencies extractable from school experience with competencies in the human resource management field, and adds the job exploration and autonomous implementation competencies that university students need for a successful career and life of their own. Second, the analysis combines competency data measured from actual activities with result data collected through interviews and surveys. Third, it analyzes not only quantitative performance (employment rate) but also qualitative performance (annual salary level). The practical uses of this study are as follows. First, it can serve as a guide when establishing career development plans for college students: rather than a strategy-free, unbalanced competition to accumulate excessive credentials, students should prepare for jobs that express their strengths, based on an analysis of the world of work and of jobs. Second, those in charge of designing experiences for college students at organizations such as schools, businesses, local governments, and government agencies can refer to the six competencies suggested here to design experiences that motivate participation, so that a single event benefits both its designers and the students. Third, in the era of digital transformation, government policy managers who envision balanced national development can set policy so that the curiosity and energy of college students contribute to that development; launching novel platform services or digitizing existing analog products, services, and corporate culture requires considerable manpower, and the activities of today's digital-generation college students are not only catalysts for all industries but also beneficial and necessary for the students' own successful career development.
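
The group comparison across salary ranges described above can be reproduced with a standard one-way ANOVA. Below is a minimal sketch (Python with pandas/SciPy), not the authors' code; the file name, column names, and grouping are invented for illustration:

```python
# Illustrative only: one-way ANOVA of a competency score across salary groups.
# File and column names are hypothetical, not from the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("student_activity_log.csv")  # assumed: one row per student

competencies = [
    "job_reinforcement", "leadership_teamwork", "globalization",
    "organizational_commitment", "job_exploration", "autonomous_implementation",
]

for comp in competencies:
    # split scores by first-annual-salary group and test mean differences
    groups = [g[comp].values for _, g in df.groupby("salary_group")]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"{comp}: F={f_stat:.2f}, p={p_val:.4f}")
```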

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT equipment failures in particular occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies on predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring within servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: a failure may cause failures in other servers or be caused by failures received from other servers. In other words, while existing studies analyzed failures on the assumption of single servers that do not affect one another, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes of that occurrence is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and the cases where the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state, was used. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the degree of involvement in a complex failure differs by server; this algorithm increases prediction accuracy by giving more weight to servers whose impact on the failure is larger. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated as a single-server state and as a multiple-server state, and the two settings were compared. The second experiment improved the prediction accuracy for complex failures by optimizing per-server thresholds. In the first experiment, the single-server setting predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server setting predicted failures for all five servers, supporting the hypothesis that servers affect one another. The study thus confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
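
The 5-minute simultaneity rule used above to define complex failures can be sketched as follows (Python/pandas). This is an illustrative reconstruction under an assumed data layout (one failure event per row with device and timestamp columns), not the authors' implementation; it chains events whose gaps fall within the window and keeps groups spanning more than one device:

```python
# Illustrative reconstruction of the 5-minute co-failure rule (assumed data
# layout: one failure event per row with "device" and "timestamp" columns).
import pandas as pd

events = pd.read_csv("failure_history.csv", parse_dates=["timestamp"])
events = events.sort_values("timestamp").reset_index(drop=True)

WINDOW = pd.Timedelta(minutes=5)
groups, current = [], [0]
for i in range(1, len(events)):
    # chain an event into the current group if it occurs within 5 minutes
    # of the previous event in the group
    if events.loc[i, "timestamp"] - events.loc[current[-1], "timestamp"] <= WINDOW:
        current.append(i)
    else:
        groups.append(current)
        current = [i]
groups.append(current)

# keep only "simultaneous" groups that span more than one device
co_failures = [
    events.loc[g, "device"].tolist()
    for g in groups
    if events.loc[g, "device"].nunique() > 1
]
print(co_failures[:5])
```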

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research on and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining research focused on second-step applications such as document classification, clustering, and topic modeling. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly subjected to a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using all the words in the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so a complex document with multiple subjects is difficult to represent accurately with the traditional approach. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents with predefined keywords. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The process is as follows: all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is affected by miscellaneous words as well as core words, the vectors corresponding to each document's keywords are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among the subjects mixed in each vector, while the proposed multi-vector method vectorizes complex documents more accurately by eliminating this interference.
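
A minimal sketch of the five-step pipeline, assuming gensim and scikit-learn; the toy corpus, keyword lists, and cluster count k are invented, and the paper's actual settings may differ:

```python
# Invented toy example of the five-step pipeline (gensim + scikit-learn);
# the corpus, keywords, and cluster count k are not the paper's settings.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

docs = [["deep", "learning", "image", "recognition"],   # (1) parsing: tokenized text
        ["stock", "market", "finance", "prediction"],
        ["deep", "network", "stock", "prediction"]]
model = Word2Vec(docs, vector_size=50, min_count=1, seed=1)  # (2) word embedding

def multi_vector(keywords, k=2):
    # (3) extract vectors for the document's keywords only
    vecs = np.array([model.wv[w] for w in keywords if w in model.wv])
    # (4) cluster keyword vectors to find the document's subjects
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(vecs)
    # (5) one vector per subject: mean of each cluster's keyword vectors
    return [vecs[labels == c].mean(axis=0) for c in range(k)]

# a complex document whose keywords mix two subjects
doc_vectors = multi_vector(["deep", "learning", "stock", "finance"], k=2)
```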

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.121-139 / 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions, and many researchers have dealt with the topic over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn, with replacement, from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while deleting irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining, but few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA applies the idea of survival of the fittest by progressively accepting better solutions, searching by maintaining a population of solutions from which better solutions are created rather than making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; solutions coded as strings are evaluated by a fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset to be used as input data for the bagging model. The chromosome is encoded as a binary string representing the instance subset; the population size was set to 100, the maximum number of generations to 150, and the crossover and mutation rates to 0.7 and 0.1, respectively. The prediction accuracy of the model is used as the fitness function of the GA: an SVM model is trained on the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as input data for the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set of Korean companies containing 1,832 externally non-audited firms: 916 bankruptcy and 916 non-bankruptcy cases. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. The whole data set was separated into three subsets: training, test, and validation. The proposed model was compared with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance-selection-based SVM model; McNemar tests were used to examine whether the proposed model significantly outperforms the others. The experimental results show that the proposed model outperforms the other models.
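
The GA-based instance selection phase can be sketched as below (Python with scikit-learn). This is an illustrative reconstruction, not the paper's code: the population size and generation count are shrunk for brevity (the paper uses 100 and 150), while the crossover rate 0.7, mutation rate 0.1, binary-string chromosome, and SVM test-accuracy fitness follow the text:

```python
# Illustrative reconstruction of GA-based instance selection with SVM
# fitness. Chromosome = binary mask over training instances; population
# and generations are shrunk for brevity (paper: 100 and 150).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_te, y_te):
    idx = mask.astype(bool)
    if idx.sum() < 2 or len(np.unique(y_tr[idx])) < 2:
        return 0.0                      # degenerate subset
    clf = SVC().fit(X_tr[idx], y_tr[idx])
    return clf.score(X_te, y_te)        # test accuracy as fitness (per the text)

def ga_select(X_tr, y_tr, X_te, y_te, pop=20, gens=10, cx=0.7, mut=0.1):
    n = len(X_tr)
    population = rng.integers(0, 2, (pop, n))
    for _ in range(gens):
        scores = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in population])
        parents = population[np.argsort(scores)][-pop // 2:]        # selection: top half
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            point = rng.integers(1, n) if rng.random() < cx else 0  # one-point crossover
            child = np.concatenate([a[:point], b[point:]])
            flip = rng.random(n) < mut                              # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        population = np.array(children)
    scores = [fitness(c, X_tr, y_tr, X_te, y_te) for c in population]
    return population[int(np.argmax(scores))].astype(bool)  # mask for the bagging phase
```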

Handover Functional Architecture for Next Generation Wireless Networks (차세대 무선 네트워크를 위한 핸드오버 기능 구조 제안)

  • Baek, Joo-Young;Kim, Dong-Wook;Kim, Hyun-Jin;Choi, Yoon-Hee;Kim, Duk-Jin;Kim, Woo-Jae;Suh, Young-Joo;Kang, Suk-Yang;Kim, Kyung-Suk;Shin, Kyung-Chul
    • Proceedings of the Korean Information Science Society Conference / 2006.10d / pp.268-273 / 2006
  • Next-generation wireless networks (4G) are a field requiring extensive research alongside the development of new radio access technologies. Among the issues, handover technology for providing seamless terminal mobility is arguably the most important. Next-generation wireless networks are expected to operate new radio access technologies together with existing networks such as WLAN and cellular networks, and to use Mobile IPv6 for network-layer mobility support. Providing seamless mobility in such networks requires a comprehensive handover capability that builds on existing work on handover functions and architectures while also considering the more diverse network environments and QoS. This paper derives the functions required to provide seamless terminal handover in next-generation wireless networks, defines the organic relationships among them, and proposes a comprehensive handover functional architecture that considers diverse network environments, user priorities, and application QoS requirements. The proposed architecture is divided into three modules, Monitoring, Triggering, and Handover, each subdivided into sub-modules as needed. Its most distinctive feature is that it considers the various factors that can trigger a handover together and performs multi-stage rather than flat comparisons among them, enabling more accurate triggering. In addition, functions for guaranteeing the terminal's QoS requirements and for network congestion control and load balancing are added to the handover function, enabling efficient use of network resources.
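
The three-module structure and the staged (rather than flat) trigger comparison might be organized as in the following sketch (Python); the paper specifies an architecture, not an implementation, so every class name, metric, and threshold here is hypothetical:

```python
# Hypothetical sketch of the three-module structure; all names, metrics,
# and thresholds are invented for illustration.
class Monitoring:
    def collect(self):
        # gather link quality, network congestion, user priority, application QoS
        return {"rssi_dbm": -72, "congestion": 0.4, "qos_satisfied": True}

class Triggering:
    def should_handover(self, m):
        # multi-stage comparison instead of a flat weighted sum of all factors
        if m["rssi_dbm"] < -80:          # stage 1: link viability
            return True
        if not m["qos_satisfied"]:       # stage 2: application QoS requirements
            return True
        return m["congestion"] > 0.8     # stage 3: congestion / load balancing

class Handover:
    def execute(self):
        print("perform Mobile IPv6 handover to the selected target network")

metrics = Monitoring().collect()
if Triggering().should_handover(metrics):
    Handover().execute()
```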

Current status of Brassica A genome analysis (Brassica A genome의 최근 연구 동향)

  • Choi, Su-Ryun;Kwon, Soo-Jin
    • Journal of Plant Biotechnology / v.39 no.1 / pp.33-48 / 2012
  • Driven by scientific curiosity about the structure and function of crops and by experimental efforts to apply that knowledge to plant breeding, genetic maps have been constructed for various crops. In $Brassica$ crops in particular, genetic mapping has accelerated since the genetic information of the model plant $Arabidopsis$ became available, and as a result the whole $B. rapa$ genome (A genome) has recently been sequenced. The genome sequences offer opportunities to develop molecular markers for genetic analysis in $Brassica$ crops. RFLP markers were widely used as the basis for genetic map construction, but their detection system is inefficient, so the technical efficiency and analysis speed of PCR-based markers have made them preferable for many forms of $Brassica$ genome study. Massive sequence-informative markers such as SSRs, SNPs, and InDels are also available to increase marker density for high-resolution genetic analysis. High-density maps are invaluable resources for QTL analysis, marker-assisted selection (MAS), map-based cloning, and comparative analysis within $Brassica$ as well as related crop species. Additionally, the advent of next-generation sequencing technology has given momentum to molecular breeding. Here we summarize genetic and genomic resources and suggest their applications for molecular breeding in $Brassica$ crops.

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is difficult to read all of this text, it is important to access it rapidly and grasp its key points. Out of this need for efficient understanding, many text summarization studies for handling and exploiting huge volumes of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, so-called "automatic summarization", have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct summaries according to the frequency of contents in the original documents. Such summaries tend to omit minor subjects that are mentioned less often in the original text. If a summary covers only the major subjects, bias occurs and information is lost, so the reader cannot ascertain every subject a document contains. To avoid this bias, a document can be summarized with balance between its topics so that all subjects are covered, but an unbalanced distribution over those subjects may still remain. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate the portions of subjects equally, so that sentences of even minor subjects are included sufficiently. In this study, we propose a "subject-balanced" text summarization method that secures balance across all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation criteria, "completeness" and "succinctness": completeness means the summary should fully include the contents of the original documents, and succinctness means the summary should contain minimal duplication within itself. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term relates to each topic. From the derived weights, highly related terms for each topic can be identified, and the subjects of documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms" in this method, are then selected. However, these terms alone are too few to explain each subject, so sufficient similar terms must be added for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after Word2Vec modeling, word vectors are created, and the similarity between all terms can be derived using cosine similarity; the higher the cosine similarity between two terms, the stronger their relationship is defined to be. Terms with high similarity to each subject's seed terms are selected, and after filtering the expanded terms, the subject dictionary is finally constructed. The next phase allocates a subject to every sentence of the original documents. To grasp the contents of all sentences, frequency analysis is first conducted with the terms composing the subject dictionaries. The TF-IDF weight of each subject is then calculated, indicating how much each sentence explains each subject. Because TF-IDF weights can grow without bound, the weights for every subject of each sentence are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum TF-IDF weight, finally yielding a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute similarity between subject sentences and form a similarity matrix. By repeated sentence selection, a summary can be generated that fully includes the contents of the original documents while minimizing duplication within itself. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summary and a frequency-based summary verified that the proposed method better retains the balance of all subjects that the documents originally have.
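
Two of the steps above, expanding seed terms via Word2Vec cosine similarity and assigning each sentence to its maximum-weight subject, are sketched below (Python with gensim). The corpus and seed terms are invented, and plain normalized term frequency stands in for the paper's TF-IDF weighting:

```python
# Invented toy example: expand seed terms with Word2Vec cosine similarity,
# then assign each sentence to its maximum-weight subject. Plain normalized
# frequency stands in for the paper's TF-IDF weighting.
from gensim.models import Word2Vec

sentences = [["room", "clean", "bed"], ["staff", "friendly", "helpful"],
             ["food", "breakfast", "tasty"], ["bed", "comfortable", "pillow"]]
model = Word2Vec(sentences, vector_size=50, min_count=1, seed=1)

seed_terms = {"room": ["room", "bed"], "service": ["staff"], "food": ["food"]}
subject_dict = {}
for subject, seeds in seed_terms.items():
    expanded = set(seeds)
    for t in seeds:  # add the terms most similar to each seed term
        expanded.update(w for w, _ in model.wv.most_similar(t, topn=3))
    subject_dict[subject] = expanded

def allocate(sentence):
    counts = {s: sum(w in terms for w in sentence) for s, terms in subject_dict.items()}
    total = sum(counts.values()) or 1
    weights = {s: c / total for s, c in counts.items()}  # normalized to [0, 1]
    return max(weights, key=weights.get)                 # subject with max weight

groups = {}
for sent in sentences:
    groups.setdefault(allocate(sent), []).append(sent)
print(groups)
```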

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method was introduced, consisting of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors, normalization was performed on each x, y, z axis value of the sensor data, and sequence data were generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features; the LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are used for classification by a softmax classifier. The loss function of the model is the cross entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128. Dropout was applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using this data, the model classified accompaniment and conversation with accuracies of 98.74% and 98.83%, respectively; both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that allow models trained on the training data to transfer to evaluation data following a different distribution. We expect to obtain a model with robust recognition performance against changes in the data that are not considered at the model learning stage.
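
The described network can be approximated in Keras as follows. This is a sketch under stated assumptions, not the released model: the window length, channel count, filter sizes, and dropout rate are guesses, while the three pooling-free convolutional layers, two 128-cell LSTM layers, dropout on the LSTM inputs, softmax output, and ADAM with initial learning rate 0.001 decayed by 0.99 per epoch follow the text:

```python
# Sketch of the described model in TensorFlow/Keras. WINDOW, CHANNELS,
# filter sizes, and the dropout rate are assumptions; the layer layout and
# optimizer settings follow the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS = 128, 9       # assumed: sliding-window length x (3 sensors x 3 axes)
STEPS_PER_EPOCH = 100           # assumed; used to decay the learning rate per epoch

model = models.Sequential([
    layers.Conv1D(64, 5, activation="relu", padding="same",
                  input_shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.Conv1D(64, 5, activation="relu", padding="same"),  # no pooling: keep temporal info
    layers.Dropout(0.5),                    # dropout on the LSTM inputs
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(2, activation="softmax"),  # accompanying vs. not (likewise for conversation)
])

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=STEPS_PER_EPOCH,
    decay_rate=0.99, staircase=True)        # 0.99 decay at each epoch boundary
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```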