• Title/Summary/Keyword: 흐름 정보 (flow information)

Search Results: 3,313

Evaluation on the Immunization Module of Non-chart System in Private Clinic for Development of Internet Information System of National Immunization Programme in Korea (국가 예방접종 인터넷정보시스템 개발을 위한 의원정보시스템의 예방접종 모듈 평가연구)

  • Lee, Moo-Sik;Lee, Kun-Sei;Lee, Seok-Gu;Shin, Eui-Chul;Kim, Keon-Yeop;Na, Bak-Ju;Hong, Jee-Young;Kim, Yun-Jeong;Park, Sook-Kyung;Kim, Bo-Kyung;Kwon, Yun-Hyung;Kim, Young-Taek
    • Journal of agricultural medicine and community health
    • /
    • v.29 no.1
    • /
    • pp.65-75
    • /
    • 2004
  • Objectives: Immunization has been one of the most effective measures for preventing infectious diseases. Keeping the immunization rate high and monitoring it continuously are important elements of national infectious disease prevention policy. To this end, the Korean CDC introduced the National Immunization Registry Program (NIRP), which has been implemented at Public Health Centers (PHCs) since 2000. The NIRP will be nearly complete once vaccination data are shared, connected, and transferred between the public and private sectors. The aim of this study was to evaluate the immunization module of the non-chart system used in private clinics against the health information system of the public health center (made by POSDATA Co., Ltd.) and the immunization registry program (made by BIT Computer Co., Ltd.). Methods: The analysis and survey were conducted by specialists in the medical, health, and health information fields from November 2001 to January 2002. We analyzed the immunization module of the non-chart system in private clinics and made recommendations. Results and Conclusions: To improve the immunization module, the system should be revised in various functions: receipt and registration, preliminary medical examination, reference and inquiry, registration of vaccines, printing of various forms, transfer of vaccination data, issuing of vaccination certificates, reminder and recall, statistical calculation, and management of vaccine stock. An accurate assessment of the current immunization module is needed for each private non-chart system, and further studies will be necessary to keep the system accurate under changing health policy related to the national immunization program. We hope that the results of this study contribute to establishing the National Immunization Registry Program.

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. A new job classification system that SW companies, SW job seekers, and job sites can all understand is therefore needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems in the SW market. We then identify ways to collect job postings from each site and collect the data through open APIs. Focusing on the relationships between the data, we keep only the job postings published on multiple job sites at the same time and delete the rest. Next, we map the job classification systems of the job sites to one another using the association rules derived from the association analysis. We complete the mapping between these market classifications, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format using the open APIs of WORKNET, JOBKOREA, and saramin, the main job sites in Korea. After filtering down to about 900 job postings simultaneously published on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method (a minimal sketch of this mining step follows below). Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology, consists of three secondary and six tertiary classifications. Notably, the new job classification system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs into three levels, while JOBKOREA and saramin each divide jobs into two levels with keyword-form subdivisions. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In the proposed system, some jobs stop at the second classification level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined the association rules derived from the collected market data with experts' opinions. The newly proposed job classification system can therefore be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by mapping occupations based on data through association analysis rather than relying on the intuition of a few experts. However, it has the limitation that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries given its success in the SW field.
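
To make the mining step concrete, here is a minimal, hypothetical sketch of deriving cross-site association rules with the Apriori algorithm. It assumes the pandas and mlxtend packages; the site prefixes, job labels, and support/confidence thresholds are illustrative placeholders, not the study's data or settings.

```python
# Hypothetical sketch: Apriori association rules linking job labels
# across sites. Labels and thresholds are illustrative, not the study's.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each transaction: the labels one posting received on different sites.
transactions = [
    ["worknet:web_dev", "jobkorea:web_programming", "sqf:web_developer"],
    ["worknet:web_dev", "saramin:web_programmer", "sqf:web_developer"],
    ["worknet:db_admin", "jobkorea:dba", "sqf:database_engineer"],
    ["worknet:db_admin", "saramin:dba", "sqf:database_engineer"],
    ["worknet:web_dev", "jobkorea:web_programming", "saramin:web_programmer"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# Frequent itemsets, then rules such as
# {worknet:web_dev} -> {sqf:web_developer}.
itemsets = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

Rules whose antecedent and consequent come from different sites are the candidates for the cross-site mapping described above.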

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • Preempting new technology is important because technology competition is getting much tougher, and stakeholders continuously conduct exploration activities to occupy new technology at the right time. Gartner's Hype Cycle has significant implications for these stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle for new technologies, because high expectations for a new technology can secure the legitimacy of R&D investment and thus sustain it. However, contrary to the high interest of industry, preceding research has faced limitations in empirical methods and source data (news, academic papers, search traffic, patents, etc.). In this study, we focused on two research questions. The first was: 'Is there a difference in the characteristics of the network structure at each stage of the Hype Cycle?' To answer it, the structural characteristics of each stage were examined through component cohesion size. The second was: 'Is there a diffusion pattern at each stage of the Hype Cycle?' This question was addressed through the centralization index and network density. The centralization index is a variance-like concept: a higher centralization index means the network is centered on a small number of nodes, i.e., a star network structure. In network terms, the star structure is centralized and shows better diffusion performance than a decentralized (circle) structure, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes fastest. We therefore examined the out-degree and in-degree centralization indices for each stage. For this purpose, we examined the structural features of the communities and the expectation diffusion patterns using Social Network Service (SNS) data for 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four technologies) listed in the report were analyzed using the R program (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API, and the relationships between a user's tweet (source) and other users' retweets (target) were converted into edges, yielding 4,124 edge list entries for analysis. As a result, we confirmed the structural features and diffusion patterns by analyzing component cohesion size, degree centralization, and density (a toy computation of these measures is sketched below). We confirmed that the number of components in each stage's group increased as time passed and that density decreased. Also, 'Innovation Trigger', the group interested in new technologies as early adopters in innovation diffusion theory, had a high out-degree centralization index, while the other groups had higher in-degree than out-degree centralization indices. It can be inferred that the 'Innovation Trigger' group has the biggest influence and that diffusion gradually slows down in the subsequent groups. In this study, network analysis was conducted using social network service data, unlike the methods of precedent research. This is significant in that it provides an idea for expanding the methods of analysis when analyzing Gartner's Hype Cycle in the future. In addition, applying innovation diffusion theory to the stages of Gartner's Hype Cycle for artificial intelligence can be evaluated positively, because the Hype Cycle has repeatedly been criticized for theoretical weakness. This study is also expected to provide stakeholders with a new perspective on technology investment decision-making.
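
For readers who want to see what the density and centralization measures look like operationally, here is a minimal sketch on a toy retweet network. It assumes the networkx package; the nodes, edges, and the Freeman-style normalization (a perfect out-star scores 1.0) are illustrative assumptions, since normalization conventions vary in the literature.

```python
# Toy retweet network: edges run tweet author (source) -> retweeter (target).
# Node names and edges are illustrative, not the study's Twitter data.
import networkx as nx

G = nx.DiGraph([("hub", "a"), ("hub", "b"), ("hub", "c"),
                ("a", "b"), ("c", "hub")])

def degree_centralization(G, mode="out"):
    """Freeman-style centralization: how star-like the network is.
    Normalized here so a perfect out-star scores 1.0; conventions vary."""
    n = G.number_of_nodes()
    degs = dict(G.out_degree() if mode == "out" else G.in_degree())
    c = [d / (n - 1) for d in degs.values()]   # normalized degrees
    return sum(max(c) - ci for ci in c) / (n - 1)

print("density           :", nx.density(G))    # edges / possible edges
print("out-centralization:", degree_centralization(G, "out"))
print("in-centralization :", degree_centralization(G, "in"))
```

A high out-degree centralization, as for the "hub" node here, is the star-like pattern the study associates with the 'Innovation Trigger' group.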

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as images, sound, video, and text are distributed through Web media, and many attempts have been made in recent years to discover new value through the analysis of these unstructured data. Among these types, text is recognized as the most representative medium for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing, and text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is widely studied not only in academia but also in industry, because it can extract various issues from text such as news and SNS (Social Network Service) posts and analyze the trends of these issues. Conventionally, issue tracking identifies major issues sustained over a long period through topic modeling and analyzes the detailed distribution of the documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that are created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general ones such as "North Korea" in an analysis over a long period. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because they can provide clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period and generated an issue flow diagram based on the similarity of issues between two consecutive periods (a toy version of this linking step is sketched below). The issue transition pattern among categories was analyzed using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. We propose the following useful application scenarios for the issue flow diagram presented in the experiment section. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next. Second, the preceding and following issues of a particular issue can easily be discovered from the diagram, which implies that our methodology can discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis: a pair of mutually similar categories induces two-way transitions, whereas a one-way transition indicates that issues in one category tend to be influenced by issues in another. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information such as read counts, written time, and comments should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future work.
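
As a rough illustration of how consecutive-period issues could be linked by similarity, the sketch below compares keyword-weight vectors with cosine similarity and draws a flow edge above a threshold. The vocabulary, weights, and the 0.3 threshold are invented for illustration; the paper's actual similarity measure and threshold are not specified here.

```python
# Hypothetical sketch: link issues across consecutive periods when their
# keyword-weight vectors (e.g., from topic modeling) are similar enough.
import numpy as np

vocab = ["north_korea", "nuclear_test", "separated_families", "election"]

# Rows are issues of a period; columns are keyword weights over vocab.
period_t  = np.array([[0.6, 0.4, 0.0, 0.0],    # issue 0: NK + nuclear test
                      [0.1, 0.0, 0.0, 0.9]])   # issue 1: election
period_t1 = np.array([[0.5, 0.0, 0.5, 0.0],    # issue 0: NK + families
                      [0.0, 0.0, 0.1, 0.9]])   # issue 1: election

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

THRESHOLD = 0.3  # illustrative cut-off for drawing a flow-diagram edge
for i, u in enumerate(period_t):
    for j, v in enumerate(period_t1):
        s = cosine(u, v)
        if s >= THRESHOLD:
            print(f"issue {i} (t) -> issue {j} (t+1): similarity {s:.2f}")
```

Issues with no edge to the next period would appear in the flow diagram as issues that promptly disappear, the first application scenario above.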

Geoscientific land management planning in salt-affected areas (염기화된 지역에서의 지구과학적 토지 관리 계획)

  • Abbott, Simon;Chadwick, David;Street, Greg
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.1
    • /
    • pp.98-109
    • /
    • 2007
  • Over the last twenty years, farmers in Western Australia have begun to change land management practices to minimise the effects of salinity on agricultural land. A farm plan is often used as a guide to implement changes, but most plans are based on minimal data and an understanding of only surface water flow. Thus farm plans do not effectively address the processes that lead to land salinisation. A project at Broomehill in the south-west of Western Australia applied an approach using a large suite of geospatial data that measured surface and subsurface characteristics of the regolith. In addition, other data were acquired, such as information about the climate and the agricultural history. Fundamental to the approach was the collection of airborne geophysical data over the study area, including radiometric data reflecting soils, magnetic data reflecting bedrock geology, and SALTMAP electromagnetic data reflecting regolith thickness and conductivity. When interpreted, these datasets added paddock-scale information on geology and hydrogeology to the other datasets, supporting on-farm and in-paddock decisions relating directly to the mechanisms driving the salinising process. The location and design of surface-water management structures such as grade banks and seepage interceptor banks were significantly influenced by the information derived from the airborne geophysical data. To evaluate the effectiveness of this planning, one whole-farm plan has been monitored by the Department of Agriculture and the farmer since 1996. The implemented plan shows a positive cost-benefit ratio, and the farm is now in the top 5% of farms in its regional productivity benchmarking group. The main influence of the airborne geophysical data on the farm plan was on the location of earthworks and revegetation proposals: any infrastructure proposal had to have a hydrological or hydrogeological justification based on the site-specific data. This approach reduced the spatial density of proposed works compared to other farm plans not guided by site-specific hydrogeological information.

A Study on the Applicability of Social Security Platform to Smart City (사회보장플랫폼과 스마트시티에의 적용가능성에 관한 연구)

  • Jang, Bong-Seok
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.11
    • /
    • pp.321-335
    • /
    • 2020
  • With the development of the 4th industrial revolution, interest in and desire for smart cities are gradually increasing, and related technologies are being developed to strengthen urban competitiveness by utilizing big data, information and communication technology, IoT, M2M, and AI. Against this background, the purpose of this study is to find out how to achieve this goal on the premise of the idea of a smart welfare city: to devise a smart welfare city in the care area, covering health care, medical care, and welfare, and to see whether it is feasible. With this recognition, the paper reviews the concept and scope of the smart city, the discussions made so far, and the issues or limitations in its connection to social security and social welfare, and, based on this, develops the concept of a welfare city. As a method of realizing the smart welfare city, the paper reviews the characteristics and features of a social security platform as well as its applicability to the smart city, especially to care services. Furthermore, the paper develops the discussion on standardization of the city in terms of political and institutional improvements, the utilization of personal information and public data, and ways of institutional improvement centering on the social security information system. This paper highlights the importance of implementing the digitally based community care and smart welfare city that our society is seeking to achieve. With regard to the social security platform based on behavioral design and the seven principles (the 6W1H method), the present paper has the limitation of dealing only with smart cities in the fields of healthcare, medicine, and welfare. Therefore, further studies are needed to investigate the effects of smart cities in other fields and to consider the application and utilization of technologies in various aspects and their impact on our society. It is expected that this paper will suggest the future course and vision not only for smart cities but also for the social security and welfare system, and thereby contribute to improving the quality of people's lives through the requisite adjustments made in each relevant field.

Analysis of Traffic Accidents Injury Severity in Seoul using Decision Trees and Spatiotemporal Data Visualization (의사결정나무와 시공간 시각화를 통한 서울시 교통사고 심각도 요인 분석)

  • Kang, Youngok;Son, Serin;Cho, Nahye
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.233-254
    • /
    • 2017
  • The purpose of this study is to analyze the main factors influencing the severity of traffic accidents and to visualize the spatiotemporal characteristics of traffic accidents in Seoul. To do this, we collected the traffic accident data that occurred in Seoul over four years, from 2012 to 2015, and classified the accidents as slight, serious, or fatal according to severity. The spatiotemporal characteristics of traffic accidents were analyzed by kernel density analysis, hotspot analysis, space-time cube analysis, and emerging hotspot analysis. The factors affecting severity were analyzed using a decision tree model (a toy version of this step is sketched below). The results show that traffic accidents in Seoul are more frequent in the suburbs than in central areas. In particular, traffic accidents were concentrated in some commercial and entertainment areas in Seocho and Gangnam, and they intensified over time. For fatal traffic accidents, there were statistically significant hotspot areas in Yeongdeungpo-gu, Guro-gu, Jongno-gu, Jung-gu, and Seongbuk-gu; however, hotspots of fatal accidents showed different patterns depending on the time of day. In terms of severity, the type of accident is the most important factor, followed in order of importance by the type of road, the type of vehicle, the time of the accident, and the type of regulation violated. Regarding the decision rules that lead to serious traffic accidents: for vans and trucks, a serious accident is highly probable where the road is wide and vehicle speed is high; for bicycles, cars, motorcycles, and other vehicles, a serious accident is highly probable under the same circumstances during the dawn hours.
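
To show the shape of the decision-tree step, here is a minimal sketch with scikit-learn on invented records. The feature names, encodings, and four toy rows only mimic the study's variables (accident type, road width, vehicle type, hour); they are not the Seoul data.

```python
# Hypothetical sketch: fit a small decision tree on toy accident records
# and print its rules. Data and features are invented, not the Seoul data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "accident_type": ["car_vs_ped", "car_vs_car", "car_vs_ped", "car_vs_car"],
    "road_width":    ["wide", "narrow", "wide", "wide"],
    "vehicle":       ["truck", "car", "van", "bicycle"],
    "hour":          [4, 14, 23, 5],
    "severity":      ["serious", "slight", "serious", "slight"],
})

X = pd.get_dummies(data.drop(columns="severity"))  # one-hot categorical vars
y = data["severity"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

The printed rules take the same if-then form as the decision rules reported in the abstract (e.g., wide road and truck implies serious).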

The Design and Evaluation of a Diagonally Splitted Column to Improve Text Readability on a Small Screen (소형 스크린 상에서의 텍스트 가독성 향상을 위한 대각분할 칼럼 디자인과 평가)

  • Kim Yeon-Ji;Lee Woo-Hun
    • Archives of design research
    • /
    • v.19 no.4 s.66
    • /
    • pp.51-60
    • /
    • 2006
  • Nowadays, reading text from screens is prevalent in everyday life. The advent of mobile information devices such as cellular phones, PDAs, and e-book readers enables us to enjoy various text-based contents any time and anywhere. Most studies comparing screen and paper readability show that screens are less readable than paper. Furthermore, the reduced line length and number of lines that can be displayed on the screens of mobile information devices deteriorate text readability. This study investigated parameters affecting text readability on small screens and designed a new text layout to improve readability. We suggested a diagonally split layout of the rectangular column, which is supposed to ease the eye movements that trace the flow of text. An experiment comparing readability between a traditional rectangular column and a diagonally split column was conducted. It revealed no significant difference between the two text layouts in terms of subjective satisfaction with the reading task or level of comprehension. However, for screen sizes of 4,000 mm² and 8,000 mm², reading speed increased by 18.9% and 34.0%, respectively, from the traditional rectangular column to the diagonally split column. We conducted a follow-up experiment to scrutinize the cause of this remarkable improvement: the readability of text in a traditional rectangular column was compared with a left-triangular column and a right-triangular column on a 4,000 mm², 3:1 ratio screen. The performance measurements revealed that participants read 21.1% and 67.6% faster with the left-triangular and right-triangular columns, respectively, than with the rectangular column. In consequence, the improvement in readability with the diagonally split column was attributed mainly to the increase in reading speed in the right-triangular column. This research verified that the diagonally split column improves text readability on a small screen, and this result is expected to contribute to the design of efficient text layouts for mobile information devices.

Scheduling Algorithms and Queueing Response Time Analysis of the UNIX Operating System (UNIX 운영체제에서의 스케줄링 법칙과 큐잉응답 시간 분석)

  • Im, Jong-Seol
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.3
    • /
    • pp.367-379
    • /
    • 1994
  • This paper describes the scheduling algorithms of the UNIX operating system and shows an analytical approach to approximating the average conditional response time for a process in UNIX. The average conditional response time is the average time between the submission of a process requiring a certain amount of CPU time and the completion of that process. The process scheduling algorithms in the UNIX system are based on priority service disciplines. That is, the behavior of a process is governed by UNIX process scheduling algorithms in which (i) time-shared computer usage is obtained by allotting each request a quantum until it completes its required CPU time, (ii) nonpreemptive switching in system mode and preemptive switching in user mode determine the quantum, (iii) the first-come-first-served discipline is applied within the same priority level, and (iv) after completing an allotted quantum, the process is placed at the end of either the runnable queue corresponding to its priority or the disk queue where it sleeps. These scheduling algorithms create a round-robin effect in user mode (illustrated by the sketch below). Using the round-robin effect and preemptive switching, we approximate a process's delay in user mode; using nonpreemptive switching, we approximate its delay in system mode. We also consider the process delay due to disk input and output operations. The average conditional response time is then obtained by approximating the total process delay. The results show an excellent response time for processes requiring system time, at the expense of processes requiring user time.
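
The round-robin effect the paper analyzes can be made tangible with a small simulation: processes sharing the CPU in fixed quanta complete later the more CPU time they require. This is a sketch under simplifying assumptions (all processes arrive at time zero, a single priority level, no system-mode or disk delays), not the paper's analytical model.

```python
# Minimal round-robin simulation: a process's response time grows with
# its required CPU time. Quantum and workloads are illustrative.
from collections import deque

def round_robin_response_times(required_cpu, quantum=1.0):
    """Completion (response) time per process under round robin,
    assuming all processes arrive at time zero."""
    queue = deque(enumerate(required_cpu))
    clock, response = 0.0, {}
    while queue:
        pid, need = queue.popleft()
        run = min(quantum, need)
        clock += run
        if need - run > 1e-12:
            queue.append((pid, need - run))   # re-queue at the tail
        else:
            response[pid] = clock             # process finishes here
    return response

# Three processes needing 1, 3, and 9 units of CPU time:
print(round_robin_response_times([1.0, 3.0, 9.0]))
# -> {0: 1.0, 1: 6.0, 2: 13.0}: longer jobs wait proportionally longer.
```

This is the conditional-response-time behavior the paper approximates analytically: response time conditioned on the amount of CPU time a process requires.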

A Study on Documentation Strategy for Archiving Locality (지역 아카이빙을 위한 기록화방안 연구)

  • Kwon, Soon-Myung;Lee, Seung-Hwi
    • The Korean Journal of Archival Studies
    • /
    • no.21
    • /
    • pp.41-84
    • /
    • 2009
  • Many cultures, memories, and histories of local life have disappeared. Some sectors, such as universities and religious institutions, have kept their records only in manuscript archives. Records of the public sector, on the other hand, were at least able to be managed under the records management law, and citizens' groups and academia also played roles in strengthening public records. But can we describe the whole body of society with public records alone? Under the records management law, a private-sector record with preservation value can be managed under national protection, yet the establishment of local archives is not obligatory. Stressing only public records is like what dictatorial governments did in past years; it ignores the diversity and demands of communities. We need to shift our view, which has focused on the public and central sectors, toward the private and local sectors. Local records management based on locality could help complete the entire puzzle. The ways to complete the puzzle are various and span wide spheres, from cultural spaces to vanishing villages. Locality is defined as the property of a certain area or the distinctiveness of its people. Establishing production strategies is as important as collecting the records produced over past years, and local archiving has to be conducted regionally and in phases. Moreover, the common wealth and recognition of communities should be reflected in the acquisition process. Beyond archiving local organizations and private records according to a collection policy, a methodology for local archiving is needed for archive management and use in various public and private fields; such a methodology could be made possible by building a local archive networking tool. It is true that local archiving is not yet familiar or clearly defined. But if we can turn the effort we have made for public records toward the private sector, we may expect great fruits there. We readily emphasize globalization and internationalization, yet our daily lives start in our villages; setting aside our small communities, the puzzle of the whole can never be completed. This is a good time to begin finding the lost pieces for the future, and the key to finding them lies in archiving localities.