• Title/Summary/Keyword: Computer-Based


A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than people's in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze AI's development directions, because AI technologies can be utilized in a wide range of fields, including the medical, financial, manufacturing, service, and education fields. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly. This openness has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed by the online collaboration of many parties. This study searched for and collected a list of major projects related to AI generated on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining techniques to the topic information that indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was fewer than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015, and the number of open source projects related to AI then increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing technology remained at the top in all years, implying that its OSS had been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the top ten most frequent topics. After 2016, however, programming languages other than Python disappeared from the top ten; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that the visualization and medical imaging topics were found at the top of the degree centrality list for 2009 to 2012, although they were not at the top of the appearance frequency list. This indicated that OSS was being developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, though the ranks of the convolutional neural network and reinforcement learning topics changed slightly. The trend of technology development was then examined using the appearance frequency of topics together with degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years it has had both high appearance frequency and high degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analyses of future technology trends and of technologies that can be converged. (A minimal sketch of the topic-network centrality computation follows this entry.)
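
The paper's core measurement, ranking project topics by co-occurrence degree centrality, can be sketched briefly. The snippet below is a minimal illustration, not the authors' code: it builds a topic co-occurrence graph from hypothetical per-project topic lists (the study used Github project topics) and ranks topics with networkx degree centrality.

```python
from itertools import combinations
import networkx as nx

# hypothetical per-project topic lists standing in for the Github data
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "nlp", "python"],
    ["deep-learning", "tensorflow", "keras", "python"],
]

G = nx.Graph()
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):
        # each co-listing of two topics adds weight to their edge
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# rank topics by degree centrality, as in the paper's topic network analysis
for topic, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{topic:17s} {c:.2f}")
```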

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science / v.18 no.3 / pp.59-88 / 2008
  • Digital convergence means integration between industries, technologies, and contents; in marketing, it usually comes with the creation of new types of products and services on the basis of digital technology, as digitalization progresses in electro-communication industries including the telecommunication, home appliance, and computer industries. One can see digital convergence not only in instruments such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required for the production, modification, distribution, and re-production of information. Convergence in contents started around 1990. Convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), born in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. The reason two opposite expectations exist is that DMB did not come out of customer needs but out of technology development; therefore, customers might have a hard time interpreting the real meaning of DMB. Time is quite critical for a high-tech product like DMB, because another product with the same function based on a different technology can replace the existing product within a short period of time. If DMB does not position itself in customers' minds quickly, other products like WiBro, IPTV, or HSDPA could replace it before it even spreads out. Therefore, positioning strategy is critical for the success of DMB. To build a correct positioning strategy, one needs to understand how consumers interpret DMB and how their interpretation can be changed via communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how AD strategy changes consumer perception. More specifically, the paper segments consumers into sub-groups based on their DMB perceptions and compares their characteristics in order to understand how they perceive DMB, and then exposes them to different printed ADs whose messages guide consumers to think of DMB in specific ways, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to perceptions about DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions about DMB after an AD that induces categorization of DMB in a given direction for each segment. If one understands and can predict the direction in which consumers will perceive a new product, a firm can select target customers easily. We segment consumers according to their perceptions and analyze their characteristics in order to find variables that can influence perceptions, such as prior experience, usage, or habit. Marketing people can then use these variables to identify target customers and predict their perceptions. If one knows how customers' perceptions are changed via AD messages, a communication strategy can be constructed properly. In particular, information from segmented customers helps to develop an efficient AD strategy for a segment that holds a prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perceptions and their characteristics. Based on the first observation, the paper segments consumers into two groups: one group perceives DMB as similar to a cellular phone, and the other perceives DMB as similar to a TV. We compare the characteristics of the two segments in order to find the reason they perceive DMB differently. Next, we expose two kinds of ADs to the subjects.
One AD describes DMB as a cellular phone and the other AD describes DMB as a personal TV. When the two ADs are exposed to the subjects, their prior perception of DMB is not yet known, in other words, whether a subject belongs to the 'similar-to-cellular-phone' segment or the 'similar-to-TV' segment; the AD's effect is, however, analyzed separately for each segment. In the research design, the final observation investigates the AD effect: perception before the AD is compared with perception after the AD, for each segment and for each AD. For the segment who perceive DMB as similar to TV, the AD that describes DMB as a cellular phone could change the prior perception, while the AD that describes DMB as a personal TV could reinforce it. For data collection, subjects were selected from undergraduate students because they have basic knowledge about most digital equipment and an open attitude toward new products and media. The total number of subjects was 240. In order to measure perception of DMB, we use an indirect measurement: comparison with other similar digital products. To select similar digital products, we pre-surveyed students and finally selected the PDA, Car-TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital instruments were asked about, and their similarities and perceived characteristics were measured. Then the two kinds of manipulated color-printed ADs were distributed, and the similarities and perceived characteristics of DMB were re-measured. Finally, purchase intention, AD attitude, manipulation checks, and demographic variables were asked about. Subjects were given a small gift for participation. The stimuli were color-printed advertisements of actual size A4, produced after several pre-tests with AD professionals and students. As a result, consumers were segmented into two subgroups based on their perceptions of DMB. The similarity measure between DMB and cellular phone and the similarity measure between DMB and TV were used to classify consumers: if a subject's first measure was less than the second, she was classified into segment A, characterized as perceiving DMB like a TV; otherwise, she was classified into segment B, which perceives DMB like a cellular phone. Discriminant analysis on these groups with their usage and attitude characteristics shows that segment A knows much about DMB and uses many digital instruments, whereas segment B, who think of DMB as a cellular phone, do not know DMB well and are not familiar with other digital instruments. So, consumers with higher knowledge perceive DMB as similar to TV, because the launch advertising for DMB led consumers to think of DMB as a TV, while consumers with less interest in digital products do not know the DMB ADs well and therefore think of DMB as a cellular phone. In order to investigate perceptions of DMB as well as the other digital instruments, we applied PROXSCAL, a multidimensional scaling technique in the SPSS statistical package. In the first step, subjects were presented with the 21 pairs of the 7 digital instruments and gave similarity judgments on a 7-point scale; for each segment, the similarity judgments were averaged and a similarity matrix was built. Secondly, PROXSCAL analyses of segments A and B were performed. In the third stage, similarity judgments between DMB and the other digital instruments were obtained after AD exposure.
Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 were labeled 'after DMB' and added to the matrix built in the first stage; PROXSCAL analysis was then applied to these matrices to check the positional difference between DMB and after-DMB. The results show that the map of segment A, who perceive DMB as similar to TV, places DMB closer to TV than to cellular phone, as expected, while the map of segment B, who perceive DMB as similar to a cellular phone, places DMB closer to cellular phone than to TV, as expected. The stress value and R-square are acceptable. The changes after the stimuli, the manipulated advertising, show that the AD bends the DMB perception toward cellular phone when the cellular-phone-like AD is exposed, and that the DMB position moves toward Car-TV, the more personalized product, when the TV-like AD is exposed; this holds for both segments, A and B, consistently. Furthermore, the paper applies correspondence analysis to the same data and finds almost the same results. The paper thus answers the two main research questions: first, perception of a new product is formed mainly from prior experience; second, AD is effective in changing and reinforcing perception. In addition, we extend perception change to purchase intention. Purchase intention is high when the AD reinforces the original perception, and the AD that shows DMB as a TV produces the worst intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current methodology cannot provide a statistical test of the perceptual change, since classical MDS models like PROXSCAL and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages need to be developed more rigorously from theoretical and managerial perspectives. The experimental procedure could also be improved for more realistic data collection; for example, web-based experiments, real product stimuli, and multimedia presentation could be employed, or the products could be displayed together in a simulated shop. In addition, demand and social-desirability threats to internal validity could influence the results; in order to handle these threats, the results of the model-intended advertising could be compared with other "pseudo" advertising. Furthermore, one could vary the level of innovativeness in order to check whether it produces different results (cf. Moon 2006). Finally, if one could create a hypothetical product that is really innovative and new, it would help to create a vacant impression state and then to study how impressions form in a more rigorous way. (A minimal MDS sketch follows this entry.)

  • PDF
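
The paper's perceptual maps come from SPSS PROXSCAL, which has no direct Python equivalent, but the same idea can be approximated with metric MDS on a precomputed dissimilarity matrix. The sketch below is a minimal illustration under that substitution; the 7-point similarity judgments for four of the seven products are invented, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

products = ["Cellular Phone", "TV", "Car-TV", "DMB"]
# invented averaged similarity judgments on a 7-point scale (7 = identical)
sim = np.array([
    [7.0, 2.1, 2.8, 5.2],
    [2.1, 7.0, 5.9, 3.4],
    [2.8, 5.9, 7.0, 4.0],
    [5.2, 3.4, 4.0, 7.0],
])
dissim = 7.0 - sim                  # convert similarity to dissimilarity
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # 2-D perceptual map coordinates
for name, (x, y) in zip(products, coords):
    print(f"{name:15s} ({x:+.2f}, {y:+.2f})")
print("stress:", round(mds.stress_, 3))  # fit diagnostic, as in the paper
```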

The Effect on Air Transport Sector by Korea-China FTA and Aviation Policy Direction of Korea (한·중 FTA가 항공운송 부문에 미치는 영향과 우리나라 항공정책의 방향)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy / v.32 no.1 / pp.83-138 / 2017
  • The Korea-China FTA entered into force on 20 December 2015, and one year has elapsed since the effectuation of the FTA with China, our country's largest trading partner. Therefore, this study looks at the trends of air transport trade between Korea and China, examines the contents of the concessions to the air transport services sector in the Korea-China FTA, analyzes the impact of the Korea-China FTA on the air transport sector, and proposes our country's aviation policy direction in order to respond to that impact. The 2016 trends of air transport trade between Korea and China are as follows: the export amount of air transport trade to China was 40.03 billion dollars, down by 9.3% from the previous year, and occupied 32.2% of the total export amount to China; the import amount of air transport trade from China was 24.26 billion dollars, down by 9.1% from the previous year, and occupied 27.7% of the total import amount from China. The contents of the concessions to the air transport services sector in the Korea-China FTA are as follows: China made concessions to aircraft repair and maintenance services and computer reservation system services, with limitations on market access and national treatment, in the air transport services sector of the China Schedule of Specific Commitments of the Korea-China FTA Chapter 8 Annex; Korea made concessions to computer reservation system services, selling and marketing of air transport services, and aircraft repair and maintenance, without limitations on market access and national treatment, in the air transport services sector of the Korea Schedule of Specific Commitments of the Korea-China FTA Chapter 8 Annex. The impact of the Korea-China FTA on the air transport sector is as follows. As for the impact on the air passenger market, in 2016 the arriving passengers on international flights from China numbered 9.96 million, up by 20.6% from the previous year, and the departing passengers to China numbered 9.90 million, up by 34.8% from the previous year. As for the impact on the air cargo market, in 2016 the exported goods volume of air cargo to China was 105,220.2 tons, up by 6.6% from the previous year, and the imported goods volume from China was 133,750.9 tons, up by 12.3% from the previous year. Among the major items of exported air cargo to China, the exported volumes of items benefiting under the Tariff Schedule of China of the Korea-China FTA increased, and among the major items of imported air cargo from China, the imported volumes of items benefiting under the Tariff Schedule of Korea of the Korea-China FTA increased. As for the impact on the logistics market, in 2016 the handling performance of exported air cargo to China by domestic forwarders was 119,618 tons, down by 2.1% from the previous year, and the handling performance of imported air cargo from China was 79,430 tons, down by 4.4% from the previous year. In 2016 the e-commerce export amount to China was 109.16 million dollars, up by 27.7% from the previous year, and the e-commerce import amount from China was 89.43 million dollars, up by 72% from the previous year. The author proposes the following aviation policy directions for Korea in light of the Korea-China FTA. First, open skies between Korea and China shall be pushed ahead.
In June 2006, Korea and China concluded an open skies agreement within the scope of the third and fourth freedoms of the air for passengers and cargo in Shandong Province and Hainan Province of China, and agreed to full open skies for flights between the two countries from the summer season of 2010. However, China protested against the interpretation of the draft memorandum of understanding to the air services agreement, so further opening did not take place. Through separate aviation talks with China, apart from the Korea-China FTA, the gradual and selective opening of the air passenger market and air cargo market shall be pushed ahead. Second, the competitiveness of the air transport industry and airports shall be secured. As methods of strengthening the competitiveness of Korea's air transport industry, a support system for strengthening national air carriers' competitiveness shall be prepared, a new basis for competition among national air carriers shall be made, and a strategic network based on national interest shall be built. As methods of strengthening the competitiveness of Korea's airports, particularly Incheon Airport, the competitiveness of the network for aviation demand creation shall be strengthened, the airport facilities and safety infrastructure shall be expanded, new added value through the airport shall be created, and the world's No. 1 level of services shall be maintained. Third, the competitiveness of aviation logistics enterprises shall be strengthened. As methods of strengthening the competitiveness of Korea's aviation logistics enterprises, under a strategy of fostering higher added value in response to changes in industry trends, new logistics markets shall be developed, the logistics infrastructure shall be expanded, and logistics professionals shall be trained. Additionally, under a strategy of expanding into the global logistics market, a support system for overseas investment by logistics enterprises shall be built, and, in line with expanding the global transport network, international cooperation shall be strengthened and the network infrastructure shall be secured. As methods of strengthening the aviation logistics competitiveness of Incheon Airport, the demand of enterprises moving into the logistics complex shall be met, a comparative advantage in the field of new growth cargo shall be preoccupied, the logistics hub's capability shall be strengthened, and the cargo processing speed at the airport shall be advanced. Fourth, in the subsequent negotiation of the Korea-China FTA, the further opening of the air transport services sector shall be secured. In the subsequent negotiation, to be initiated within two years after the entry into force of the Korea-China FTA, it is necessary to ask for the further opening of the concessions on computer reservation system services and aircraft repair and maintenance services, in which China's level of concessions in the air transport services sector is insufficient compared with the level of concessions in the existing FTAs concluded by China.
In conclusion, in order to respond to the impact of the Korea-China FTA on Korea's air passenger market, air cargo market, and aviation logistics market, the following policy tasks shall be pushed ahead: taking into consideration national air carriers' competitiveness and the nation's benefits, gradual and selective open skies shall be pushed ahead; a support system to strengthen the competitiveness of the air transport industry and airports shall be built; entry into the aviation logistics market by logistics enterprises shall be expanded; and preparations shall be made to ask for the further opening of the air transport services sector, where China's level of concessions is low.

  • PDF

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration / v.2 no.1 / pp.26-32 / 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required; the tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel; the interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires much human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes using the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated many times. Therefore, an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave, but it has two improvements: no interpolation error and very fast computation. With this technique, the mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit preinstalled environment. It runs in the X-Window/Motif environment, with a menu designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing the AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections. (A minimal semblance sketch follows this entry.)

  • PDF
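
To make the velocity-spectrum step concrete, here is a minimal, self-contained sketch of NMO-based semblance scanning on a synthetic CMP gather. It illustrates the standard technique the abstract refers to, not the xva/Geobit code; the geometry, the spike "wavelet", and the window length are invented.

```python
import numpy as np

dt, nt = 0.004, 500                       # 4 ms sampling, 2 s record
offsets = np.arange(100, 1100, 100.0)     # receiver offsets in m (assumed)

# synthetic CMP gather: one reflection at t0 = 0.8 s, v = 2000 m/s
gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):
    tx = np.sqrt(0.8**2 + (x / 2000.0) ** 2)   # hyperbolic moveout
    gather[i, int(round(tx / dt))] = 1.0

def semblance(gather, offsets, t0, v, dt, win=3):
    """NMO-correct the gather for (t0, v) and return semblance in a window."""
    n = gather.shape[1]
    num = den = 0.0
    for k in range(-win, win + 1):             # small time window around t0
        s = ss = 0.0
        for i, x in enumerate(offsets):
            tx = np.sqrt((t0 + k * dt) ** 2 + (x / v) ** 2)
            j = int(round(tx / dt))
            a = gather[i, j] if 0 <= j < n else 0.0
            s += a
            ss += a * a
        num += s * s
        den += ss
    return num / (len(offsets) * den) if den > 0 else 0.0

# scan trial velocities; the semblance peak picks the stacking velocity
vels = np.arange(1500, 3001, 100.0)
best = max(vels, key=lambda v: semblance(gather, offsets, 0.8, v, dt))
print("best stacking velocity ~", best, "m/s")   # expect ~2000
```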

A Case Study on Application of the Menu Engineering Technique in Government Offices Contract Foodservice (관공서급식소의 메뉴엔지니어링기법을 적용한 메뉴분석 사례연구)

  • Rho, Sung-Yoon
    • Journal of Nutrition and Health / v.42 no.1 / pp.78-96 / 2009
  • The purpose of this study was to analyze and evaluate the menus served in a government office foodservice by using Kasavana & Smith's menu engineering. Sales and food costs were collected from the daily sales reports for one year, from Jan 2 to Dec 31, 2007. The calculations for the menu analysis and the customer data were done by computer using the MS Excel 2003 spreadsheet program and the SPSS 12.0 package program. Menu mix% (MM%) and unit contribution margin were used as the variables, following Kasavana & Smith. The four possible classifications of the menu engineering technique are 'STAR', 'PLOWHORSE', 'PUZZLE', and 'DOG' (a minimal classification sketch follows this entry). The main menus served during the year comprised 128 dishes, and about 141 people visited the restaurant daily. The mean age of the men was 44.1 ± 6.3 years and that of the women was 32.7 ± 6.4 years, the men's being statistically higher (p < .0001). The rates of STAR menus were 'Western style (75.0%)', 'guk/tang-ryu (48.1%)', 'jjigae/jeongol-ryu (23.1%)', and 'bap-ryu (17.2%)', in sequence; there were no STAR menus in gui/jorim/jjim-ryu. PLOWHORSE menus were 'gui-ryu (75.0%)', 'guk/tang-ryu (29.6%)', and 'bap-ryu (27.6%)', in sequence. There were no PUZZLE or DOG menus in 'jjigae/jeongol-ryu'. PUZZLE menus were 'jorim/jjim-ryu and myeon-ryu (each 33.3%)' and 'bap-ryu (31.0%)', in sequence; many PUZZLE menus were also found in 'Chinese food (75.0%)' and 'myeon-ryu (55.6%)'. This study provides basic data from a regular menu analysis method that applies scientific menu analysis techniques to government office foodservices, and I would like to suggest that menu management be based on the necessity and the results of menu analysis according to seasonal and mid- to long-term plans.
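
Kasavana & Smith's scheme classifies each item by whether its menu-mix share and unit contribution margin fall above or below a cutoff. The sketch below is a minimal illustration with invented sales figures (not the paper's data), using the common cutoffs of 70% of an equal menu share for MM% and the sales-weighted average contribution margin for CM.

```python
menu = [  # (name, units sold, selling price, food cost) -- hypothetical
    ("bap-ryu A",  420, 6000, 2500),
    ("guk/tang B", 310, 7000, 2600),
    ("Western C",  150, 9000, 4800),
    ("jorim D",     60, 8000, 4500),
]

total_units = sum(units for _, units, _, _ in menu)
mm_cutoff = (1.0 / len(menu)) * 0.70     # 70% of an equal menu-mix share
cm_cutoff = sum(u * (p - c) for _, u, p, c in menu) / total_units

for name, units, price, cost in menu:
    mm = units / total_units             # menu mix share of this item
    cm = price - cost                    # unit contribution margin
    high_mm, high_cm = mm >= mm_cutoff, cm >= cm_cutoff
    label = {(True, True): "STAR", (True, False): "PLOWHORSE",
             (False, True): "PUZZLE", (False, False): "DOG"}[(high_mm, high_cm)]
    print(f"{name:10s} MM%={mm:6.1%} CM={cm:5d} -> {label}")
```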

Dosimetry of the Low Fluence Fast Neutron Beams for Boron Neutron Capture Therapy (붕소-중성자 포획치료를 위한 미세 속중성자 선량 특성 연구)

  • Lee, Dong-Han;Ji, Young-Hoon;Lee, Dong-Hoon;Park, Hyun-Joo;Lee, Suk;Lee, Kyung-Hoo;Suh, So-Heigh;Kim, Mi-Sook;Cho, Chul-Koo;Yoo, Seong-Yul;Yu, Hyung-Jun;Gwak, Ho-Shin;Rhee, Chang-Hun
    • Radiation Oncology Journal / v.19 no.1 / pp.66-73 / 2001
  • Purpose: For research on Boron Neutron Capture Therapy (BNCT), fast neutrons generated from the MC-50 cyclotron at the Korea Cancer Center Hospital, with a maximum energy of 34.4 MeV, were moderated by 70 cm of paraffin, and the dose characteristics were then investigated. Using these results, we hope to establish a protocol for the dose measurement of epithermal neutrons, to build a basis for the dose characteristics of the epithermal neutrons emitted from a nuclear reactor, and to examine the feasibility of accelerator-based BNCT. Methods and Materials: For measuring the absorbed dose and dose distribution of the fast neutron beams, we used a Unidos 10005 electrometer (PTW, Germany) with IC-17 (Far West, USA), IC-18, and EIC-1 ion chambers made of A-150 plastic, and an IC-17M ion chamber made of magnesium for the gamma dose. These chambers were flushed with tissue-equivalent gas and argon gas, respectively, at a flow rate of 5 cc per minute. Using the Monte Carlo N-Particle (MCNP) code, a transport program for mixed fields of neutrons, photons, and electrons, the two-dimensional dose and energy fluence distributions were calculated, and these results were compared with the measured ones. Results: The absorbed dose of the fast neutron beams was 6.47 × 10⁻³ cGy per 1 MU at the 4 cm depth of the water phantom, which is assumed to be the effective depth for BNCT. The magnitude of the gamma contamination intermingled with the fast neutron beams was 65.2 ± 0.9% at the same depth. In the dose distribution according to water depth, the neutron dose decreased linearly and the gamma dose decreased exponentially as the depth increased. The factor expressing the energy level, D₂₀/D₁₀, of the total dose was 0.718. Conclusion: Through direct measurement using the two ion chambers, which are made of different wall materials, and computer calculation of the isodose distribution using the MCNP simulation method, we determined the dose characteristics of low-fluence fast neutron beams. If a power supply and a target material that generate high voltage and current are developed, and the gamma contamination is reduced by lead or bismuth, we think accelerator-based BNCT may become possible. (A sketch of the paired-chamber dose separation follows this entry.)

  • PDF
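
The "two ion chambers with different wall materials" is the standard paired-chamber method for mixed neutron-gamma fields: a tissue-equivalent chamber responds to both components, while a magnesium chamber responds mostly to gammas. The sketch below is a heavily simplified illustration of that separation; the r_te value reuses the paper's quoted total absorbed dose, but the magnesium reading and both sensitivity factors are invented, not the paper's calibration.

```python
def split_neutron_gamma(r_te, r_mg, k_te=1.0, k_mg=0.05):
    """Solve r_te = k_te*Dn + Dg and r_mg = k_mg*Dn + Dg for (Dn, Dg).

    k_te, k_mg: relative neutron sensitivities of the tissue-equivalent
    (A-150) and magnesium chambers (illustrative values, not calibrated).
    """
    dn = (r_te - r_mg) / (k_te - k_mg)   # neutron dose component
    dg = r_te - k_te * dn                # gamma dose component
    return dn, dg

# illustrative readings in cGy/MU (magnesium reading is invented)
dn, dg = split_neutron_gamma(6.47e-3, 4.3e-3)
print(f"neutron dose = {dn:.2e} cGy/MU, gamma dose = {dg:.2e} cGy/MU")
print(f"gamma fraction = {dg / (dn + dg):.1%}")
```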

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server on top of a private LTE network, and analyzes the image to extract the character information of the device ID and the gas usage amount by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in the image. But some applications need to ignore not-of-interest types of characters and only have to focus on specific types. For an example of such an application, an automatic gasometer reading system only needs to extract the device ID and gas usage amount character information from gasometer images to send bills to users. Not-of-interest character strings, such as the device type, manufacturer, manufacturing date, specification, etc., are not valuable information to the application. Thus, the application has to analyze the point-of-interest region and specific types of characters to extract valuable information only. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the point-of-interest region for selective character information extraction. We built 3 neural networks for the application system. The first is a convolutional neural network which detects the point-of-interest regions of the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a point-of-interest region into spatial sequential feature vectors; and the third is a bi-directional long short-term memory network which converts the spatial sequential information into character strings using time-series analysis mapping from feature vectors to character strings. In this research, the point-of-interest character strings are the device ID and the gas usage amount. The device ID consists of 12 Arabic numerals and the gas usage amount consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services Cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures the gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request from the mobile device into an input queue with a FIFO (First In, First Out) structure. The slave process consists of the 3 types of deep neural networks which conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continually polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into the device ID character string, the gas usage amount character string, and the position information of the strings, returns the information to an output queue, and switches to idle mode to poll the input queue again. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for the training, validation, and testing of the 3 types of deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal data are clean images; noise means images with noise signals; reflex means images with light reflection in the gasometer region; scale means images with small object size due to long-distance capturing; and slant means images which are not horizontally flat. The final character string recognition accuracies for the device ID and the gas usage amount on normal data are 0.960 and 0.864, respectively. (A minimal sketch of the master-slave queueing pattern follows this entry.)
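
The master-slave queueing pattern the abstract describes can be reduced to a few lines. The sketch below is an in-process stand-in using threads and FIFO queues (the real system uses separate CPU master and GPU slave processes on AWS), and recognize() is a placeholder for the three-network pipeline (detector, CNN feature extractor, bi-directional LSTM).

```python
import queue
import threading

requests = queue.Queue()   # FIFO input queue fed by the master
results = queue.Queue()    # output queue read back by the master

def recognize(image):
    # placeholder for detection + CRNN recognition of device ID / usage
    return {"device_id": "000000000000", "usage": "1234"}

def slave():
    while True:
        image = requests.get()      # poll the input queue (blocks when idle)
        if image is None:           # shutdown signal
            break
        results.put(recognize(image))

worker = threading.Thread(target=slave, daemon=True)
worker.start()

requests.put(b"...gasometer jpeg bytes...")   # master: enqueue a reading
print(results.get())                          # master: deliver to the device
requests.put(None)                            # stop the worker
```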

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.137-154 / 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. In order to prevent this, the quarantine authorities have made various human and material endeavors, but the infectious diseases have continued to occur. Avian influenza is known to have first been identified in 1878, and it rose to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In a nation where this disease has not spread, foot-and-mouth disease is recognized as an economic or political disease, because it restricts international trade by making the import of processed and non-processed livestock complicated, and because quarantine is costly. In a society where the whole nation is connected as one zone of life, there is no way to fully prevent the spread of infectious disease; hence, there is a need to be aware of the occurrence of the disease and to take action before it spreads. For both human and animal infectious diseases, an epidemiological investigation of definitively diagnosed cases is implemented, and measures are taken to prevent the spread of the disease according to the investigation results, simultaneously with confirmation of the diagnosis. The foundation of an epidemiological investigation is figuring out where one has been and whom one has met. From a data perspective, this can be defined as an action taken to predict the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop prediction models for infectious diseases using Big Data and deep learning technology, but there is little active research in the form of model building studies and case reports. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on a regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as logistic regression, Lasso, support vector machines, and random forests (see the sketch after this entry). In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by tuning the hyper-parameters of the modeling in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning model. The difference between the 2016 model and the 2017 model is that, in the later model, visiting information on facilities such as feed factories and slaughterhouses was used for analysis, and the information on poultry, which had been limited to chicken and duck, was expanded to goose and quail. In addition, an explanation of the results was added in 2017 to help the authorities in making decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environment Big Data. The significance of this study is that it describes the evolution process of a prediction model using Big Data as used in the field, and the model is expected to become more complete if the form of the viruses is taken into consideration.
This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide earlier and more effective disease prevention.
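
The abstract names four classifier families (logistic regression, Lasso, SVM, random forest) evaluated with confusion matrices and ROC curves. The snippet below is a hedged sketch of such a comparison on synthetic, imbalanced data, not the study's vehicle-movement dataset; an L1-penalized logistic regression stands in for Lasso in the classification setting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic, imbalanced data standing in for outbreak-vs-no-outbreak labels
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic":       LogisticRegression(max_iter=1000),
    "lasso-logistic": LogisticRegression(penalty="l1", solver="liblinear"),
    "svm":            SVC(probability=True),
    "random forest":  RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:15s} ROC AUC = {auc:.3f}")
```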

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using various vehicle-based V2X (Vehicle to X) communications, such as traffic control, the reduction of delay and the reliability of real-time services are very important in addition to high data speed. 5G communication uses the high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straightness, while their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors; under existing networks it is difficult to overcome these constraints. The underlying centralized SDN also has a limited capability to offer delay-sensitive services, because communication with many nodes creates overload in its processing. Basically, SDN, a structure that separates the control-plane signals from the data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay. Since SDNs with the usual centralized structure find it difficult to meet the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be separated at a certain scale to construct a new type of network, which can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high confidence, and hyper-connectivity, and should be based on a new form of split SDN, rather than the existing centralized SDN structure, even in the worst conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor because it is fast enough, with less than 1 ms of delay, but the information change cycle and the data processing time of the SDN are factors that greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information should be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request the relevant information according to the information flow.
For the simulation, as the data rate of 5G is high enough, we can assume that the neighbor-vehicle support information reaches the car without errors. Furthermore, we assumed 5G small cells of 50 to 250 m in cell radius, and the maximum speed of the vehicle was taken as 30 to 200 km/hour, in order to examine the network architecture that minimizes the delay. (A rough dwell-time sketch follows this entry.)
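
One way to see why cell size and vehicle speed drive the delay budget is to compare the worst-case cell dwell time 2R/v against the per-update costs the paper identifies (RTD and SDN processing). The sketch below is a back-of-the-envelope illustration with assumed component values, not the paper's simulation; the SDN processing time in particular is invented.

```python
R_values = [50, 150, 250]    # cell radius in m (paper's simulated range)
v_values = [30, 100, 200]    # vehicle speed in km/h (paper's simulated range)

rtd = 0.001                  # round trip delay, s (paper: less than 1 ms)
sdn_processing = 0.005       # assumed SDN data-processing time, s

for R in R_values:
    for v_kmh in v_values:
        v = v_kmh / 3.6                  # convert km/h to m/s
        dwell = 2 * R / v                # worst-case chord = cell diameter
        slack = dwell - (rtd + sdn_processing)
        print(f"R={R:3d} m  v={v_kmh:3d} km/h  dwell={dwell:6.2f} s  "
              f"slack for info updates={slack:6.2f} s")
```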

The Effect of Synchronous CMC Technology by Task Network: A Perspective of Media Synchronicity Theory (개인의 업무 네트워크 특성에 따른 동시적 CMC의 영향 : 매체 동시성 이론 관점)

  • Kim, Min-Soo;Park, Chul-Woo;Yang, Hee-Dong
    • Asia pacific journal of information systems / v.18 no.3 / pp.21-43 / 2008
  • A task network formed of different individuals can be recognized as a social network; therefore, the way one communicates with people inside or outside the network has considerable influence on one's outcomes. Moreover, the position at which a member stands in a network changes the effects of the information systems supporting communication with others. In this paper, we study how personal CMC (computer-mediated communication) tools affect the tasks that members of a network perform through diverse task networks. In particular, we focus on the synchronicity of CMC. For this, we take the perspective of Media Synchronicity Theory, which was suggested as a critique of Media Richness Theory. From this perspective, the objective is to find which characteristics of networks make the value of IT supporting synchronicity high. In research on social networks, there have been two traditional perspectives for explaining the effect of a network: embeddedness and diversity. These differ on which type of social network can provide more economic benefits. As similar studies have been reported by various researchers, these are also divided into the bonding and bridging views, which are based on internal and external ties, respectively. Size, density, and centrality were measured as the characteristics of personal task networks. Size captures the extent of relationships between members: it is the total number of other colleagues who work with a specific member on a certain project; that is, the larger the size of the task network, the more coworkers interact with each other through the job. Density is the ratio of the number of relationships actually realized to the total number of possible ones; in an ego-centered network, it is defined as the ratio of the number of relationships actually made to the total number of possible ones among the members actually involved with each other. The higher the density, the larger the number of projects on which the members collaborate. Centrality reflects how close a member's position is to the exact center of the whole network. There are several methods to measure it; in this research, betweenness centrality was adopted. It is measured by the position at which one member stands between others in a network, and its level is determined by the geodesics, the shortest paths between members. Centrality also indicates the level of one's role as a broker among others. To verify the hypotheses, we interviewed and surveyed a group of employees of a nationwide financial organization in which a groupware system is used. They were questioned about two CMC applications: MSN, with a higher level of synchronicity, and email, with a lower one. As a result, the larger the size of one's own task network, the smaller its density, and the higher one's centrality, the greater the effect of using the task network with CMC tools. Above all, this positive effect is verified to be much greater when using CMC applications with higher-level synchronicity. Among the variety of situations under which the use of CMC gives more benefits, this research is considered one of the rare cases that treat the characteristics of the task network as moderators, by focusing on ITs for the operation of one's own task network.
It is another contribution of this research to prove empirically that the value of an information system depends on the social, or comparative, characteristics of time: though the same amount of time is shared, the social characteristics of its users change its value. In addition, it is significant to show empirically that ITs with higher-level synchronicity have a positive effect on productivity. Many businesses worry about the negative effects of synchronous ITs, since their employees are likely to use them for personal social activities; however, this research can help to dismiss that concern about CMC tools. (A minimal sketch of the three network measures follows this entry.)
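
For concreteness, the three ego-network measures the paper uses (size, density, and betweenness centrality) can be computed in a few lines with networkx. The ties below are hypothetical; the study measured them from interviews and surveys.

```python
import networkx as nx

# hypothetical project ties around one employee ("me")
G = nx.Graph([("me", "a"), ("me", "b"), ("me", "c"), ("me", "d"),
              ("a", "b"), ("b", "c"), ("c", "d")])

ego = nx.ego_graph(G, "me")                 # "me" plus direct coworkers
size = ego.number_of_nodes() - 1            # network size: coworker count
alters = ego.subgraph(set(ego.nodes) - {"me"})
density = nx.density(alters)                # ties realized among coworkers
betweenness = nx.betweenness_centrality(G)["me"]  # brokerage position

print(f"size={size}  density={density:.2f}  betweenness={betweenness:.2f}")
```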