• Title/Summary/Keyword: IT application technology

Search results: 9,078 (processing time: 0.063 seconds)

A Study on the Applicability of Social Security Platform to Smart City (사회보장플랫폼과 스마트시티에의 적용가능성에 관한 연구)

  • Jang, Bong-Seok
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.11
    • /
    • pp.321-335
    • /
    • 2020
  • As the 4th Industrial Revolution progresses, interest in and demand for smart cities are steadily increasing, and related technologies such as big data, information and communication technology, IoT, M2M, and AI are being developed as a way to strengthen urban competitiveness. Against this background, the purpose of this study is to find out how such a goal can be achieved on the premise of the idea of a smart welfare city; in other words, to devise a smart welfare city in the care area, covering health care, medical care, and welfare, and to examine whether it is feasible. With this recognition, the paper reviews the concept and scope of the smart city, the discussions made so far, and the issues and limitations in connecting it to social security and social welfare, and on that basis formulates the concept of the welfare city. As a means of realizing the smart welfare city, the paper reviews the characteristics and features of a social security platform as well as its applicability to smart cities, especially to care services. Furthermore, the paper discusses standardization of the city in terms of political and institutional improvements, the utilization of personal information and public data, and institutional improvements centering on the social security information system. The paper highlights the importance of implementing the digitally based community care and smart welfare city that our society is seeking to achieve. Since the social security platform considered here is based on behavioral design and the seven principles (the 6W1H method), the present paper has the limitation of dealing only with smart cities in the fields of healthcare, medicine, and welfare. Further studies are therefore needed to investigate the effects of smart cities in other fields and to consider the application and utilization of the relevant technologies in various aspects and their corresponding impact on our society. It is expected that this paper will suggest a future course and vision not only for smart cities but also for the social security and welfare system, and thereby contribute to improving the quality of people's lives through the requisite adjustments in each relevant field.

Comparative Analysis of the Medical Radiation Shielding Performance of Barium Compounds through Monte Carlo Simulations (몬테카를로 시뮬레이션을 통한 바륨화합물의 의료방사선 차폐능 비교 분석)

  • Kim, Seonchil;Kim, Kyotae;Park, Jikoon
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.6
    • /
    • pp.403-408
    • /
    • 2013
  • This study estimated the shielding rate of a barium compound as a function of thickness through Monte Carlo simulation, with a view to developing medical radiation shielding products that can replace the lead currently in use. Barium sulfate ($BaSO_4$) was used as the shielding material, and the specimen thickness was simulated from 0.1 mm to 5 mm with a specimen area of $15{\times}15cm^2$, a barium sulfate density of $4.5g/cm^3$, and a lead density of $11.34g/cm^3$. The input source was simulated in 10 kVp steps over a continuous X-ray energy spectrum (40 kVp ~ 120 kVp). In the 40 kVp ~ 60 kVp range, the absorption probability at thicknesses of 3 mm ~ 5 mm showed the same shielding rate as lead, but below 2 mm the shielding rate was somewhat lower than that of the existing lead shielding material. In the 70 kVp ~ 120 kVp energy band, the shielding rate was similar to that of the existing lead shielding material, but was estimated to be considerably lower below 0.5 mm. This study thus estimated the shielding rate of the barium compound as a function of thickness across the medical X-ray energy band through Monte Carlo simulation, compared it with the existing lead, and sought to verify the validity of pure barium sulfate as an X-ray shielding material for medical radiation. As a result, in the medical radiation energy band of 70 kVp ~ 120 kVp, a barium compound layer of at least 2 mm was estimated to provide a shielding rate of 95% or more relative to the existing 1.5 mm lead, a result considered valid as base data for producing lighter-weight radiation shielding products for medical radiation.
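The shielding rates compared in the abstract follow the exponential attenuation (Beer-Lambert) law, I/I0 = exp(-μx), where the linear attenuation coefficient μ is the mass attenuation coefficient times density. A minimal sketch of that comparison; the mass attenuation coefficients below are illustrative placeholders, not the values used in the paper's simulation:

```python
import math

def transmission(mu_mass, density, thickness_cm):
    """Fraction of photons transmitted through a shield,
    via the Beer-Lambert law: I/I0 = exp(-mu_linear * x),
    with mu_linear = mass attenuation coefficient * density."""
    mu_linear = mu_mass * density  # cm^-1
    return math.exp(-mu_linear * thickness_cm)

# Illustrative mass attenuation coefficients (cm^2/g) at a single
# diagnostic energy; placeholders, not the paper's simulated data.
lead = transmission(mu_mass=2.0, density=11.34, thickness_cm=0.15)
barium_sulfate = transmission(mu_mass=1.5, density=4.5, thickness_cm=0.20)

# The paper's "shielding rate" corresponds to the absorbed fraction.
print("lead shielding rate:", 1 - lead)
print("BaSO4 shielding rate:", 1 - barium_sulfate)
```

Because barium sulfate is far less dense than lead, a thicker layer is needed for a comparable shielding rate, which is why the paper compares 2 mm of the barium compound against 1.5 mm of lead.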

Analysis of the Genetic Diversity of Radish Germplasm through SSR Markers Derived from Chinese Cabbage (배추 SSR 마커를 이용한 무의 육성 계통 및 수집종의 유전적 다양성 분석)

  • Park, Suhyoung;Choi, Su Ryun;Lee, Jung-Soo;Nguyen, Van Dan;Kim, Sunggil;Lim, Yong Pyo
    • Horticultural Science & Technology
    • /
    • v.31 no.4
    • /
    • pp.457-466
    • /
    • 2013
  • Since the early 1980s, the National Institute of Horticultural & Herbal Sciences has been breeding and collecting diverse radish lines to select samples with better horticultural characteristics and ultimately to develop superior radish cultivars. Genetic diversity is a crucial factor in crop improvement, so it is very important to secure a wide range of variation through sample collection. The collected samples were compared with one another to assess the level of diversity among the collections; this assessment supports wider application of the gathered resources and helps determine the direction for securing further samples. To this end, this experiment examined whether SSR markers derived from Chinese cabbage could be transferred to radish. Among the radish breeding lines and introduced resources, 44 lines were used as materials and genotyped with 22 selected SSR markers. The analysis showed that, among the selected markers, 'cnu_m139' and 'cnu_m289' were the most useful for diversity evaluation. The genetic relationships of the radish genetic resources showed that geographic origin affected diversity, and the radish groups were further differentiated by the year in which they were bred, demonstrating differences between the older radish breeds and the more recently developed ones. Even though a relatively small number of markers was used in the analysis, it was possible to distinguish whether a radish was bred 30 years ago or in the 2000s, and lines with similar physical shapes formed distinct groups, showing that SSR markers can indeed be applied to study diversity within radish breeding lines. From these results, it can be concluded that SSR markers developed for Chinese cabbage can be applied to examine genetic diversity and to analyze relationships (genetic resource determination) in radish.

Flexible Specialization: A New Paradigm for Modern Industrial Society ? (柔軟的 專門化(Flexible Specialization) : 현대 産業社會의 새로운 패러다임 ?)

  • Lee, Deog-An
    • Journal of the Korean Geographical Society
    • /
    • v.28 no.2
    • /
    • pp.148-162
    • /
    • 1993
  • There is much speculation that modern capitalist society is undergoing a fundamental and qualitative change towards flexible specialization. The purpose of this study is to examine this hypothesis. The paper focuses on: the idea of flexible specialization; the significance of this transition; the industrial district; and the implications of this new production system for Korean industrial space. The main arguments of this study are as follows. First, as the different groups of researchers apply the idea of flexible specialization according to their own specifications, the current debate on the topic is not very fruitful. Not surprisingly, the concept of flexible specialization has come to overlap with subcontracting. This integration of subcontracting into flexible specialization systems is inappropriate, however, because the two concepts have different historical contexts. The other cause of the controversy is the concept's inherent weakness, conceptual ambiguity: today's flexibility becomes tomorrow's rigidity. Secondly, the transition towards flexible specialization has been only partially achieved even in advanced capitalist countries. The application of dualistic explanatory frameworks, such as rigidity versus flexibility, mass production versus small-lot multi-product production, and de-skilling versus re-skilling, has greatly exaggerated the transformation from Fordism to post-Fordism; there is no intermediary ground between the two poles. Considering that the workforce allocated to Fordist mass-production assembly lines is not as large as one might imagine, the shift from mass to flexible production has only limited implications for the transformation of the capitalist economy. Thirdly, the 'industrial district' controversy has contributed to highlighting the importance of small firms and of areas as production space. The agglomeration of small firms in specific areas is common in Korea, but it is quite different from the industrial district based on flexible specialization. The Korean phenomenon stems from close interactions with a major parent firm rather than from interactions between flexible, specialized, autonomous, and technology-intensive small firms. Most Korean subcontractors are still low-skilled, labour-intensive, and heavily dependent on their major parent firms; thus the assertion that the Seoul Metropolitan Area has adopted flexible specialization has no basis. Fourthly, the main concern of flexible specialization is small firms. However, the corporate organizations that need product diversification and technological specialization are oligopolistic large corporations typified by multinational corporations, and it is precisely these organizations that are adopting Fordist mass-production methods. The problem of product diversification will resolve itself naturally if economic internationalization progresses further; what matters more for business success is firms' quality and price competitiveness rather than product diversification. Lastly, in order to dispel further misunderstanding on this issue, it is imperative that the conceptual ambiguity be resolved most urgently. This study recommends adopting more specific and direct terminology (such as factory automation, computer design, out-sourcing, the exploitation of part-time labour, and job redesign) rather than ideological terms (such as Taylorism, Fordism, neo-Taylorism, neo-Fordism, post-Fordism, flexible specialization, and peripheral post-Fordism). As the debate on this topic has only just started, there is still a long way to go before consensus is reached.


A Study on the Potential Use of ChatGPT in Public Design Policy Decision-Making (공공디자인 정책 결정에 ChatGPT의 활용 가능성에 관한연구)

  • Son, Dong Joo;Yoon, Myeong Han
    • Journal of Service Research and Studies
    • /
    • v.13 no.3
    • /
    • pp.172-189
    • /
    • 2023
  • This study investigated the potential contribution of ChatGPT, a massive language and information model, to the decision-making process of public design policy, focusing on the characteristics inherent to public design. Public design applies the principles and approaches of design to address societal issues and aims to improve public services. Formulating public design policies and plans requires extensive data, including the general status of the area, population demographics, infrastructure, resources, safety, existing policies, legal regulations, landscape, spatial conditions, the current state of public design, and regional issues; public design is therefore a field of design research that encompasses a vast amount of data and language. Considering the rapid advancement of artificial intelligence technology and the significance of public design, this study explores how massive language and information models like ChatGPT can contribute to public design policy. We reviewed the concepts and principles of public design and its role in policy development and implementation, and examined the overview and features of ChatGPT, including its application cases and preceding research, to determine its utility in public design policy decision-making. The study found that ChatGPT can offer substantial linguistic information during the formulation of public design policies and can assist in decision-making; in particular, it proved useful in providing diverse perspectives and swiftly supplying the information needed for policy decisions. The trend of utilizing artificial intelligence in government policy development was also confirmed through various studies. However, the use of ChatGPT also raised ethical, legal, and personal privacy issues; notably, ethical dilemmas were identified, along with problems of bias and fairness. To apply ChatGPT practically in public design policy decision-making, it is first necessary to strengthen the capacities of policy developers and public design experts; second, it is advisable to create a provisional regulation, tentatively named the 'Ordinance on the Use of AI in Policy', to refine its utilization continuously until legal adjustments are made. Implementing these two strategies is deemed necessary at present. In conclusion, employing massive language and information models like ChatGPT in the public design field, which harbors a vast amount of language, holds substantial value.

A Study on the Performance Evaluation of Fiber-Reinforced Concrete Using PET Fiber Reinforcement (PET 섬유 보강재를 사용한 섬유 보강 콘크리트의 성능 평가에 관한 연구)

  • Ri-On Oh;Yong-Sun Ryu;Chan-Gi Park;Sung-Ki Park
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.4
    • /
    • pp.261-283
    • /
    • 2023
  • This study examined the short- and long-term performance changes of PET (polyethylene terephthalate) fiber reinforcement, one of the synthetic fiber types under review for use as a performance reinforcement in fiber-reinforced concrete, in order to assess its performance stability. To this end, the residual performance of the PET fiber was analyzed after exposure to acid/alkali environments; the flexural strength and equivalent flexural strength of the PET fiber-reinforced concrete mixture were analyzed by age; and changes in the surface of PET fibers collected from the concrete specimens were examined using a scanning electron microscope (SEM). In the acid/alkali exposure tests, the strength retention rate of the PET fiber was 83.4~96.4% in the acidic environment and 42.4~97.9% in the alkaline environment. The strength retention rate of the fiber itself decreased significantly when exposed to high-temperature, strongly alkaline conditions, while it increased for finished yarn coated with epoxy. The flexural strength and equivalent flexural strength tests of the PET fiber-reinforced concrete mixture showed no reduction in flexural strength, and the equivalent flexural strength results likewise showed no degradation of performance as a fiber reinforcement. The SEM analysis also revealed no surface damage or cross-sectional change of the PET reinforcing fibers. These results indicate that no damage or cross-section reduction of PET reinforcing fibers occurs in the cement concrete environment, even when fiber-reinforced concrete is exposed to high temperatures at an early stage or as it ages, and that any reduction in PET fiber strength in the cement concrete environment is judged to be of no concern. As the flexural strength and equivalent flexural strength by age also developed stably, the performance degradation from hydrolysis that is a concern when using PET fiber reinforcement did not occur, confirming stable residual strength retention characteristics.

Text Mining-Based Emerging Trend Analysis for e-Learning Contents Targeting for CEO (텍스트마이닝을 통한 최고경영자 대상 이러닝 콘텐츠 트렌드 분석)

  • Kyung-Hoon Kim;Myungsin Chae;Byungtae Lee
    • Information Systems Review
    • /
    • v.19 no.2
    • /
    • pp.1-19
    • /
    • 2017
  • The original scripts of e-learning lectures for the CEOs of corporation S were analyzed using topic analysis, a text mining method. Twenty-two topics were extracted from keywords in five years of records spanning 2011 to 2015, and various issues were then analyzed. Promising topics were selected through evaluation and element analysis of the members of each topic. In management and economics, members showed high satisfaction with and interest in topics on marketing strategy, human resource management, and communication. In the humanities, philosophy, the history of war, and history drew high interest and satisfaction, while in the lifestyle field, mental health did so. Topics that accounted for a large proportion of content were also examined, but they did not raise member satisfaction. In the field of IT, educational content responds sensitively to changes of the times, but this may not increase members' interest and satisfaction. The study found that content production for CEOs should draw out deep implications for value innovation through technology application, instead of stopping at the technical delivery of information. Previous studies classified content superficially based on content program names when analyzing the status of content operation; text mining, by contrast, can derive deep content and subject classifications from the unstructured data of the scripts themselves. This approach can reveal current shortages and needed fields when the service contents of the themes are displayed by year. This study was based on data obtained from influential e-learning companies in Korea; obtaining broader results was difficult because data were not acquired from portal sites or social networking services. The content of CEOs' e-learning trends was analyzed, and data analysis was also conducted on the intellectual interests of CEOs in each field.

Medical Information Dynamic Access System in Smart Mobile Environments (스마트 모바일 환경에서 의료정보 동적접근 시스템)

  • Jeong, Chang Won;Kim, Woo Hong;Yoon, Kwon Ha;Joo, Su Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.47-55
    • /
    • 2015
  • Recently, hospital information system environments have tended to incorporate various smart technologies, and smart devices such as smartphones and tablet PCs are accordingly utilized in medical information systems. These environments consist of various applications executing on heterogeneous sensors, devices, systems, and networks. In such hospital information system environments, applying security services through traditional access control methods causes problems. Most existing security systems use an access control list structure, permitting only access defined in an access control matrix by entries such as client name and service object method name. The major problem with this static approach is that it cannot adapt quickly to changed situations; hence, new security mechanisms are needed that are more flexible and can be easily adapted to environments with very different security requirements. Research is also needed to address changes in the medical treatment of patients. This paper proposes a dynamic approach for medical information systems in smart mobile environments, focusing on how medical information systems are accessed under dynamic access control methods built on the hospital's existing information system environment. The physical environment consists of mobile X-ray imaging devices, dedicated and general-purpose smart devices, a PACS, an EMR server, and an authorization server. The software environment for the synchronization and monitoring services of the mobile X-ray imaging equipment was developed on the .NET Framework under Windows 7, and the dedicated smart device application implementing the dynamic access services was built with JSP and the Java SDK on the Android OS. The medical information exchanged between the PACS, the hospital's mobile X-ray imaging devices, and the dedicated smart devices follows the DICOM medical image standard, and the EMR information is based on HL7. To provide the dynamic access control service, patient contexts are classified according to bio-information such as oxygen saturation, heart rate, blood pressure, and body temperature, and event trace diagrams are presented for two cases: the general situation and the emergency situation. The dynamic access to medical care information was designed around an authentication method whose authentication information comprises ID/password, roles, position, working hours, and emergency certification codes for emergency patients. In the general situation, the dynamic access control method grants access to medical information according to the values of the authentication information; in an emergency, access to medical information is granted by an emergency code, without the authentication information. We also constructed an integrated medical information database schema, conforming to medical information standards, that covers medical information, patients, medical staff, and medical image information. Finally, we demonstrate the usefulness of the dynamic access application service on smart devices through execution results of the proposed system for patient contexts such as the general and emergency situations. In particular, the proposed system provides effective medical information services with smart devices in emergency situations through dynamic access control, and we expect it to be useful for u-hospital information systems and services.
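The context classification and two-path authorization described above (general situation via authentication information, emergency via an emergency code) can be sketched as a simple rule check. The thresholds and field names below are illustrative assumptions, not the paper's actual clinical criteria or schema:

```python
def classify_context(spo2, heart_rate, body_temp_c):
    """Classify a patient's situation from bio-information.
    Thresholds are illustrative placeholders, not clinical
    values taken from the paper."""
    if spo2 < 90 or heart_rate > 130 or heart_rate < 40 or body_temp_c > 39.5:
        return "emergency"
    return "general"

def authorize(context, credentials=None, emergency_code=None):
    """General situation: access is granted only with authentication
    information (ID/PWD, role, etc.). Emergency: access is granted
    via an emergency certification code, without credentials."""
    if context == "emergency":
        return emergency_code is not None
    return credentials is not None

ctx = classify_context(spo2=85, heart_rate=120, body_temp_c=38.0)
print(ctx, authorize(ctx, emergency_code="E-001"))
```

The point of the dynamic scheme is that the same request is evaluated differently depending on the patient's current context, rather than against a fixed access control list.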

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has produced various kinds of online news media, news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data helps users survey many events at a glance. Moreover, building and providing an event network based on the relevance between events can greatly help readers understand current events. This study proposes a method for extracting event networks from large news text data. We first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing with NPMI and Word2Vec, kept only meaningful words and merged synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date and to detect events from the peaks of those distributions. A total of 32 topics were extracted, and the occurrence time of each event was inferred from the points at which each topic's distribution surged. In total, 85 events were detected; the final 16 events were then filtered and presented using a Gaussian smoothing technique. We also calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them accordingly. Finally, the event network was set up with each event as a vertex and the relevance score between events as the weight of the edge connecting the vertices. The event network constructed with our method made it possible to sort the major events in Korean politics and society over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify the relevance of events that were difficult to detect with existing methods. In text preprocessing, we applied various text mining techniques together with Word2Vec to improve the accuracy of extracting proper nouns and compound nouns, which have been difficult to handle in analyzing Korean text. The event detection and network construction techniques of this study offer the following practical advantages. First, LDA topic modeling, being unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and the date information of the collected news articles makes it possible to express the distribution by topic as a time series. Second, by calculating relevance scores from the co-occurrence of topics, which is difficult to grasp with existing event detection, and constructing the event network, the connections between events can be presented in a summarized form. This is supported by the fact that the relevance-based event network proposed in this study was in fact constructed in order of occurrence time; the network also makes it possible to identify which event served as the starting point of a series of events. A limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between themes is not taken into account. Subsequent studies should calculate the relevance between events not covered in this study, or between events belonging to the same topic.
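The edge-building step described above, scoring event pairs by the cosine coefficient of their co-occurrence vectors and connecting pairs above a cutoff, can be sketched as follows. The event vectors and the edge threshold are invented for illustration, not values from the paper:

```python
import math

def cosine(u, v):
    """Cosine coefficient between two co-occurrence count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical per-date topic-occurrence counts for three events
event_a = [3, 0, 1, 2]
event_b = [2, 0, 1, 3]
event_c = [0, 4, 0, 0]

# Connect event A to other events whose relevance exceeds a cutoff;
# the threshold 0.5 is an illustrative choice.
threshold = 0.5
edges = []
for name, vec in [("B", event_b), ("C", event_c)]:
    score = cosine(event_a, vec)
    if score >= threshold:
        edges.append(("A", name, round(score, 3)))
print(edges)
```

Repeating this over all detected event pairs and keeping the scores as edge weights yields the weighted event network the study describes.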

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of effects come from 20% of causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers generating 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to the tremendous reduction of distribution and inventory costs brought by the development of ICT (information and communication technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration, whose importance is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors affecting knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma arising from the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration, extended to reflect the characteristics of the work. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income within a group of people, was applied to capture the effect of inequality of knowledge contribution. Hypotheses were set up on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio becomes too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of a group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate to collaboration efficiency in an online community in a curvilinear fashion, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more sensitive for more academic tasks.
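The two focal variables of the study, the Pareto ratio and the Gini coefficient, can be computed directly from per-editor contribution counts. A minimal sketch; the edit counts below are invented for illustration, not Wikipedia data from the paper:

```python
def pareto_ratio(contributions):
    """Share of total contributions made by the top 20% of
    contributors (rounded, with at least one person counted)."""
    counts = sorted(contributions, reverse=True)
    top_n = max(1, round(len(counts) * 0.2))
    return sum(counts[:top_n]) / sum(counts)

def gini(contributions):
    """Gini coefficient of contribution inequality:
    0 = perfectly equal, values near 1 = highly unequal."""
    counts = sorted(contributions)
    n = len(counts)
    total = sum(counts)
    cum = sum((i + 1) * c for i, c in enumerate(counts))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical edit counts for the ten editors of one article
edits = [50, 20, 8, 6, 5, 4, 3, 2, 1, 1]
print(round(pareto_ratio(edits), 2), round(gini(edits), 2))
```

In the study's design, values like these, computed per featured article, enter the Cox regression as the focal covariates, with promotion time as the outcome.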