• Title/Summary/Keyword: web-based 5D system

Search Result 78, Processing Time 0.032 seconds

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the conditions of Big Data: the amount of data (volume), the data input and output speed (velocity), and the variety of data types (variety). If one intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need to analyze SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) convey the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines.
Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; the resulting interaction is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm its utility for storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
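Function (1) above, the daily topic-keyword ranking, can be sketched in a few lines. This is a minimal pure-Python illustration using raw term counts; the function name and stop-word list are ours, and the actual TITS applies Korean noun extraction and topic modeling over Hadoop and MongoDB rather than simple counting.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "is", "of", "to", "rt"}  # placeholder stop-word list

def daily_keyword_ranking(tweets_by_day, top_n=3):
    """Rank keywords per day by frequency after stop-word removal.

    A stand-in for TITS function (1): the real system applies Korean
    morphological analysis and topic modeling instead of raw counts.
    """
    ranking = {}
    for day, tweets in tweets_by_day.items():
        counts = Counter()
        for tweet in tweets:
            for token in tweet.lower().split():
                if token not in STOP_WORDS:
                    counts[token] += 1
        ranking[day] = [word for word, _ in counts.most_common(top_n)]
    return ranking

sample = {"2013-03-01": ["election results announced", "election turnout rises"]}
print(daily_keyword_ranking(sample))
```

The same per-day counts also feed the treemap and time-series views in functions (2)-(4).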

Application of Terrestrial LiDAR for Reconstructing 3D Images of Fault Trench Sites and Web-based Visualization Platform for Large Point Clouds (지상 라이다를 활용한 트렌치 단층 단면 3차원 영상 생성과 웹 기반 대용량 점군 자료 가시화 플랫폼 활용 사례)

  • Lee, Byung Woo;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.54 no.2
    • /
    • pp.177-186
    • /
    • 2021
  • For disaster management and the mitigation of earthquakes on the Korean Peninsula, active-fault investigation has been conducted for the past 5 years. In particular, the investigation of sediment-covered active faults integrates geomorphological analysis of airborne LiDAR data, surface geological survey, and geophysical exploration, and unearths subsurface active faults by trench survey. However, the fault traces revealed by trench surveys are available for investigation only for a limited time before the sites are restored to their previous condition. Thus, the geological data describing fault trench sites remain only as qualitative data in research articles and reports. To overcome this temporal limitation of geological studies, we utilized a terrestrial LiDAR to produce 3D point clouds of the fault trench sites and restored them in a digital space. Terrestrial LiDAR scanning was conducted at two trench sites located near the Yangsan Fault and acquired amplitude and reflectance from the surveyed area, as well as color information, by combining photogrammetry with the LiDAR system. The scanned data were merged to form 3D point clouds with an average geometric error of 0.003 m, sufficient accuracy to restore the details of the surveyed trench sites. However, we found that more post-processing of the scanned data would be necessary, because the amplitudes and reflectances of the point clouds varied depending on the scan positions, and the colors of the trench surfaces were captured differently depending on the light exposure available at the time. Such point clouds are very large and can be visualized only through a limited set of software tools, which limits data sharing among researchers. As an alternative, we suggest Potree, an open-source web-based platform, to visualize the point clouds of the trench sites.
In this study, as a result, we found that terrestrial LiDAR data can be practical for increasing the reproducibility of geological field studies and can be made easily accessible to researchers and students in the Earth Sciences.
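The 0.003 m average geometric error cited above is a mean point-to-point registration error between merged scans. One way such a metric can be computed is as a mean nearest-neighbor distance; the brute-force sketch below is our illustration, not the software used in the study (large clouds would need a spatial index such as a k-d tree).

```python
import math

def average_geometric_error(cloud_a, cloud_b):
    """Mean nearest-neighbor distance (m) from points in cloud_a to
    cloud_b, a simple proxy for scan-registration error.  Brute force:
    O(len(a) * len(b)); real tools use spatial indices for large clouds."""
    total = 0.0
    for p in cloud_a:
        total += min(math.dist(p, q) for q in cloud_b)
    return total / len(cloud_a)
```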

Comparative Study on the Methodology of Motor Vehicle Emission Calculation by Using Real-Time Traffic Volume in the Kangnam-Gu (자동차 대기오염물질 산정 방법론 설정에 관한 비교 연구 (강남구의 실시간 교통량 자료를 이용하여))

  • 박성규;김신도;이영인
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.4
    • /
    • pp.35-47
    • /
    • 2001
  • Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentration of pollutants. A characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air, in the form of an atmospheric emission inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for each vehicle type. The majority of inventories are compiled using passive data from either surveys or transportation models, and by their very nature they tend to be out of date by the time they are compiled. Current trends are towards integrating urban traffic control systems with assessments of the environmental effects of motor vehicles. In this study, a methodology for calculating motor vehicle emissions from real-time traffic data was examined, and a methodology for estimating CO emissions was applied to a test area in Seoul. The traffic data, which are required on a street-by-street basis, were obtained from the induction loops of the traffic control system. The speed-related mass of CO emitted from vehicle tailpipes was calculated from the traffic-system data, considering traffic volume, vehicle composition, average velocity, and link length. The result was then compared with that of an emission calculation method based on the VKT (Vehicle Kilometers Travelled) of each vehicle category.
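The link-based calculation described above multiplies traffic volume, link length, and a speed-dependent emission factor. The sketch below illustrates that arithmetic only; the factor table and function name are hypothetical, not values or code from the paper.

```python
def co_emission_g(volume_veh, link_length_km, speed_kmh, factor_table):
    """CO mass (g) emitted on one link during one interval:
    traffic volume x link length x speed-dependent emission factor."""
    factor = None
    # pick the factor for the highest speed band not exceeding speed_kmh
    for threshold, f in sorted(factor_table.items()):
        if speed_kmh >= threshold:
            factor = f
    return volume_veh * link_length_km * factor

# illustrative (not measured) factors in g/veh-km by speed band (km/h)
FACTORS = {0: 40.0, 30: 20.0, 60: 10.0}
print(co_emission_g(1200, 0.5, 45, FACTORS))
```

Summing this quantity over all links and time intervals would give the inventory that the VKT-based method is compared against.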


The Relationship between Visual Stress and MBTI Personality Types (시각적 스트레스와 MBTI 성격유형과의 관계)

  • Kim, Sun-Uk;Han, Seung-Jo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.9
    • /
    • pp.4036-4044
    • /
    • 2012
  • This study aims to investigate the association between web-based visual stress and MBTI personality types. The stressor inducing visual stress was built from 14 vowels of the Korean alphabet as content, with parallel stripes as the background on the screen, and was presented to each subject for 5 minutes. The dependent variable indicating how much visual stress a person experiences is the reduction rate of the flicker fusion frequency, evaluated with a visual flicker fusion frequency tester. The independent variables are gender and the MBTI preference pairs (E-I, S-N, T-F, and J-P), and the hypotheses are based on the human information processing model and previous studies. The results show that the reduction rate is not significantly affected by gender, S-N, or J-P, but that E-I and T-F have significant influences on it. The reduction rate in the I type is almost 2 times that in the E type, and the rate in the T type is 2.2 times that in the F type. This study can be applied to selecting adequate personnel for jobs requiring low sensitivity to visual stressors in areas where human error may lead to critical damage to an overall system.
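The dependent variable above, the reduction rate of the flicker fusion frequency, is taken here to be the standard relative drop between pre- and post-exposure measurements; the one-line sketch below is our illustration (the function name is ours, and the exact definition used in the paper may differ).

```python
def cff_reduction_rate(before_hz, after_hz):
    """Reduction rate (%) of the critical flicker fusion frequency;
    a larger drop after exposure is read as greater visual stress."""
    return (before_hz - after_hz) / before_hz * 100.0

print(cff_reduction_rate(40.0, 38.0))
```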

Modeling the effects of excess water on soybean growth in converted paddy field in Japan 1. Predicting groundwater level and soil moisture condition - The case of Biwa lake reclamation area

  • Kato, Chihiro;Nakano, Satoshi;Endo, Akira;Sasaki, Choichi;Shiraiwa, Tatsuhiko
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2017.06a
    • /
    • pp.315-315
    • /
    • 2017
  • In Japan, more than 80% of the soybean growing area consists of converted paddy fields, and excess water is one of the major problems in soybean production. For example, a recent study (Yoshifuji et al., 2016) suggested that in fields with a shallow groundwater level (GWL) (< 1 m depth), a rising GWL even over a short period (e.g., 1 day) inhibits soybean growth. Thus it is becoming more and more important to predict the GWL and soil moisture in detail. In addition to conventional surface drainage and underdrains, FOEAS (Farm-Oriented Enhancing Aquatic System), which is expected to control the GWL in fields adequately, has been developed recently. In this study we attempted to predict the GWL and soil moisture condition at a converted field with FOEAS in the Biwa lake reclamation area, Shiga prefecture, near the center of the main island of Japan. The two-dimensional HYDRUS model (Šimůnek et al., 1999), based on the Richards equation, was used to calculate soil water movement. The calculation domain was 10 and 5 meters in the horizontal and vertical directions, respectively, with two layers: a 20 cm-thick plowed layer and an underlying subsoil layer. The center of the main underdrain (10 cm in diameter) was assumed to be 5 meters from both ends of the domain and 10-60 cm deep from the surface, in accordance with the field experiment. The hydraulic parameters of the soil were estimated with the digital soil map in the "Soil information web viewer" and the Agricultural soil-profile physical properties database, Japan (SolphyJ) (Kato and Nishimura, 2016). Hourly rainfall depth and daily potential evapotranspiration rate data were given as the upper boundary condition (B.C.). For the bottom B.C., a constant upward flux, representing the inflow to the field from outside, was given. A seepage-face condition was employed for the surroundings of the underdrain. The initial condition was set as GWL = 60 cm.
We then compared the simulated and observed volumetric water content at a depth of 15 cm and the GWL. While the model described the variation of the GWL well, it tended to overestimate the soil moisture throughout the growing period. Judging from the field condition and the observed soil moisture and GWL data, considering soil structure (e.g., cracks and clods) when determining the soil hydraulic parameters of the plowed layer may improve the simulation of soil moisture.
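HYDRUS solves the Richards equation together with a soil-water retention function, commonly the van Genuchten model sketched below. The parameter values in the example are generic loam-like numbers for illustration, not those estimated from SolphyJ in this study.

```python
def van_genuchten_theta(h_cm, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(h) from the van Genuchten
    retention model; h_cm is the pressure head (cm, negative when
    unsaturated).  m = 1 - 1/n (Mualem constraint)."""
    if h_cm >= 0:
        return theta_s  # saturated
    m = 1.0 - 1.0 / n
    se = (1.0 + abs(alpha * h_cm) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

# generic loam-like parameters, not the SolphyJ-derived values of the study
print(van_genuchten_theta(-100.0, 0.078, 0.43, 0.036, 1.56))
```

The paper's closing remark fits this picture: cracks and clods in the plowed layer change the effective retention curve, so parameters estimated from intact-soil databases can overestimate stored moisture.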


Preliminary Report of the 1998~1999 Patterns of Care Study of Radiation Therapy for Esophageal Cancer in Korea (식도암 방사선 치료에 대한 Patterns of Care Study (1998~1999)의 예비적 결과 분석)

  • Hur, Won-Joo;Choi, Young-Min;Lee, Hyung-Sik;Kim, Jeung-Kee;Kim, Il-Han;Lee, Ho-Jun;Lee, Kyu-Chan;Kim, Jung-Soo;Chun, Mi-Son;Kim, Jin-Hee;Ahn, Yong-Chan;Kim, Sang-Gi;Kim, Bo-Kyung
    • Radiation Oncology Journal
    • /
    • v.25 no.2
    • /
    • pp.79-92
    • /
    • 2007
  • Purpose: For the first time, a nationwide survey was conducted in the Republic of Korea to determine the basic parameters for the treatment of esophageal cancer and to provide a solid cooperative system for the Korean Patterns of Care Study (PCS) database. Materials and Methods: During 1998~1999, 246 biopsy-confirmed esophageal cancer patients who received radiotherapy were enrolled from 23 different institutions in South Korea. Random sampling was based on the power allocation method. Patient parameters and specific information regarding tumor characteristics and treatment methods were collected and registered through the web-based PCS system. The data were analyzed by use of the Chi-squared test. Results: The median age of the enrolled patients was 62 years. The male-to-female ratio was about 91 to 9, an absolute male predominance. The performance status ranged from ECOG 0 to 1 in 82.5% of the patients. Diagnostic procedures included an esophagogram (228 patients, 92.7%), endoscopy (226 patients, 91.9%), and a chest CT scan (238 patients, 96.7%). Squamous cell carcinoma was diagnosed in 96.3% of the patients; mid-thoracic esophageal cancer was most prevalent (110 patients, 44.7%), and 135 patients presented with clinical stage III disease. Fifty-seven patients received radiotherapy alone and 37 patients received surgery with adjuvant postoperative radiotherapy. Half of the patients (123 patients) received chemotherapy together with RT, and 70 of them (56.9%) received it as concurrent chemoradiotherapy. The most frequently used chemotherapeutic regimen was a combination of cisplatin and 5-FU. Most patients received radiotherapy with either 6 MV (116 patients, 47.2%) or 10 MV photons (87 patients, 35.4%). Radiotherapy was delivered through a conventional AP-PA field for 206 patients (83.7%) without a CT plan, and the median delivered dose was 3,600 cGy.
The median total dose of postoperative radiotherapy was 5,040 cGy, while for the non-operative patients the median total dose was 5,970 cGy. Thirty-four patients received intraluminal brachytherapy with high-dose-rate Iridium-192. Brachytherapy was delivered at a median dose of 300 cGy per fraction and was typically delivered 3~4 times. The most frequently encountered complication during radiotherapy was esophagitis, in 155 patients (63.0%). Conclusion: For the evaluation and treatment of esophageal cancer patients at radiation facilities in Korea, this study will provide guidelines and benchmark data for the solid cooperative systems of the Korean PCS. Although some differences were noted between institutions, there was no major difference in treatment modalities and RT techniques.

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies are seldom gathered, and in some cases it is also hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may carry some bias. Furthermore, regarding a given social issue, several experts may reach totally different conclusions, because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords, representing social issues and problems, from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each document into paragraphs. In the meantime, using LDA, we extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to the topic it best matches. Finally, each topic has several best-matched paragraphs. Furthermore, suppose there is a topic (e.g., Unemployment Problem) and a best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"). In this case, we can grasp detailed information about the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly.
Through this prototype system, we have detected various social issues appearing in our society and have shown the effectiveness of our proposed methods in experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
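The matching idea described above, which scores a paragraph against each topic using (i) topic terms and (ii) their probabilities, can be sketched as a log-likelihood comparison. This simplified version is our illustration of the idea, not the paper's exact algorithm; the names and the floor probability `epsilon` for unseen words are ours.

```python
import math

def match_topic(paragraph_tokens, topics, epsilon=1e-6):
    """Assign a paragraph to the topic maximizing the sum of log word
    probabilities; words absent from a topic get a small floor
    probability (epsilon) so the product never collapses to zero."""
    best_topic, best_score = None, float("-inf")
    for label, word_probs in topics.items():
        score = sum(math.log(word_probs.get(w, epsilon)) for w in paragraph_tokens)
        if score > best_score:
            best_topic, best_score = label, score
    return best_topic

# topic clusters as in the paper's example; labels come from human annotators
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Housing": {"rent": 0.5, "apartment": 0.5},
}
print(match_topic(["workers", "unemployment", "layoff"], topics))
```

Summing log probabilities rather than multiplying raw probabilities keeps long paragraphs numerically stable, which is the standard way to compare per-topic likelihoods.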

An Experimental Study on Evaluation of Axially Compressive Buckling Strength of Corroded Temporary Steel (부식 손상된 가시설 강재의 축압축 좌굴강도 추정에 관한 실험적 연구)

  • Kim, In Tae;Lee, Myoung Jin;Shin, Chang Hee
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.15 no.6
    • /
    • pp.135-146
    • /
    • 2011
  • Steel structures are generally painted to prevent corrosion damage. However, the painted film deteriorates with increasing service life, and corrosion damage then occurs on the steel surface, reducing the cross-sectional area. As a result, the buckling strength of steel structures can decrease due to corrosion damage. Methods for evaluating the axial buckling strength of columns with various section shapes and support conditions have been presented, but no evaluation method has been established for the buckling strength of irregular, nonprismatic columns. In this study, the axial buckling strength of corroded steel was evaluated based on buckling tests of corroded steel specimens cut from a temporary steel structure. A total of 10 corroded specimens with various slenderness ratios were taken from the web of the temporary structure's main beam, with lengths of 200, 300, 400, 500, and 600 mm. The rust products were removed by chemical treatment. The surface geometry was then measured at intervals of 1×1 mm using an optical 3D digitizing system, and the residual thickness of the specimens was calculated. The axial buckling test was performed on the 10 corroded specimens and 12 non-corroded specimens under the fixed-fixed support condition. From the test results, the effect of corrosion damage on the axial buckling load was investigated. Regardless of the degree of corrosion damage, the axial buckling strengths of the corroded and non-corroded specimens could be evaluated consistently by using the minimum average residual thickness, or the average residual thickness minus its standard deviation. Reasonable measurement intervals for residual thickness were also proposed from these results for application in practical work.
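For the prismatic reference case against which the corroded specimens are compared, the elastic buckling load is the Euler formula with effective-length factor K = 0.5 for the fixed-fixed condition. The sketch below is a generic illustration, not the paper's evaluation procedure; corrosion would enter through reduced section properties such as those computed from the minimum average residual thickness.

```python
import math

def euler_buckling_load_kn(e_mpa, i_mm4, length_mm, k=0.5):
    """Elastic buckling load P_cr = pi^2 * E * I / (K * L)^2, in kN.

    e_mpa: elastic modulus (MPa = N/mm^2); i_mm4: second moment of
    area (mm^4); length_mm: column length (mm); k = 0.5 corresponds
    to the fixed-fixed support condition used in the buckling tests.
    """
    p_newton = math.pi ** 2 * e_mpa * i_mm4 / (k * length_mm) ** 2
    return p_newton / 1000.0
```

With consistent N-mm units the formula returns newtons directly, hence the single division by 1000 to report kN.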