• Title/Summary/Keyword: Computer Communication


A Study on the Implementation of Medical Telemetry Systems using Wireless Public Data Network (무선공중망을 이용한 의료 정보 데이터 원격 모니터링 시스템에 관한 연구)

  • 이택규;김영길
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2000.10a
    • /
    • pp.278-283
    • /
    • 2000
  • As information and communication technology has developed, blood pressure, pulse, electrocardiogram (ECG), SpO2, and blood tests can now be checked easily at home. Routine health checks become possible by interlocking home medical instruments with a wireless public data network. Such a service removes the inconvenience of visiting the hospital every time and saves individuals time and cost. Bio-signal data detected from the human body in each house is transmitted to a distant hospital over the wireless public data network. The medical information transmission system uses a wireless short-range network to send the acquired bio-signals wirelessly from a personal device to the main center system in the hospital. The remote telemetry system is implemented with a wireless medium access protocol, adapted from the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) access mode standardized in IEEE 802.11. Among the home-care telemetry functions that measure blood pressure, pulse, electrocardiogram, and SpO2, this study implements the ECG (electrocardiograph) measurement part: the ECG function is built into a mobile device and a 900 MHz band wireless public data interface is added. The elderly, patients, and indeed anyone at home can then obtain an ECG and keep a record of the data. This is essential for managing people whose health examinations reveal heart disease or more complicated heart conditions, and for continuously observing latent heart disease patients. To implement the medical information transmission system on the wireless network, the ECG data among the bio-signals is transmitted using a wireless network modem and the NCL (Native Control Language) protocol to access the wireless network; through the SCR (Standard Context Routing) protocol the network is connected to a wired host computer. The computer checks the recorded personal information and the received ECG data, then sends the corresponding assessment back to the mobile device. The study suggests a medical data transmission system model based on the wireless public data network.
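
The abstract references the IEEE 802.11 CSMA/CA access scheme without implementation detail. As a purely illustrative sketch (not the authors' code), the following Python fragment models the basic carrier-sense / random-back-off step; the retry limit, contention-window sizes, and slot time are assumed values, and `channel_busy` is a hypothetical probe of the medium.

```python
import random

def csma_ca_send(channel_busy, max_retries=5, cw_min=16, cw_max=1024, slot_ms=0.05):
    """Simplified CSMA/CA: sense the channel, then back off a random number of
    slots drawn from a contention window that doubles after each failed attempt.
    `channel_busy` is a callable returning True while the medium is in use.
    Returns the total back-off delay (ms) on success, or None if retries run out."""
    cw = cw_min
    waited = 0.0
    for _ in range(max_retries):
        # Carrier sense: defer while the medium is busy.
        while channel_busy():
            waited += slot_ms
        # Collision avoidance: wait a random back-off before transmitting.
        backoff_slots = random.randint(0, cw - 1)
        waited += backoff_slots * slot_ms
        if not channel_busy():          # medium still idle -> transmit now
            return waited
        cw = min(cw * 2, cw_max)        # binary exponential back-off
    return None

# Example: a channel that is busy roughly 30% of the time.
delay = csma_ca_send(lambda: random.random() < 0.3)
print("back-off delay before transmission:", delay, "ms")
```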


A Study of Textured Image Segmentation using Phase Information (페이즈 정보를 이용한 텍스처 영상 분할 연구)

  • Oh, Suk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.2
    • /
    • pp.249-256
    • /
    • 2011
  • Finding a new set of features representing textured images is one of the most important problems in textured image analysis. This is because it is impossible to construct a perfect set of features representing every textured image, so it is inevitable to choose relevant features that are efficient for the image processing job at hand. This paper aims to find relevant features that are efficient for textured image segmentation. In this regard, the paper presents a different method for the segmentation of textured images based on the Gabor filter. The Gabor filter is known to be an efficient and effective tool that models the human visual system for texture analysis. Filtering a real-valued input image with the Gabor filter yields complex-valued output data defined in the spatial frequency domain. This complex value, as usual, gives a modulus and a phase. This paper focuses its attention on the phase information rather than the modulus information. The modulus information is generally considered very useful for region analysis in texture, while the phase information has been considered almost of no use. This paper shows, however, that the phase information can also be fully useful and effective for region analysis in texture once a good method is introduced. We propose the "phase-derived method", an efficient and effective way to compute the useful phase information directly from the filtered value. This new method effectively reduces the computing burden and widens the range of textured images to which it can be applied.
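
To make the modulus/phase distinction concrete, here is a minimal NumPy/SciPy sketch of Gabor filtering that returns both quantities from the complex response. It is not the paper's "phase-derived method"; the kernel parameters (frequency, orientation, sigma, size) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Complex Gabor kernel: a Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid oscillates along direction `theta`.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * frequency * xr)
    return envelope * carrier

def gabor_response(image, frequency=0.15, theta=0.0):
    """Filter a real-valued image; return (modulus, phase) of the complex output."""
    k = gabor_kernel(frequency, theta)
    response = convolve2d(image, k, mode="same", boundary="symm")
    return np.abs(response), np.angle(response)

# Example on a synthetic texture: vertical stripes.
img = np.sin(2 * np.pi * 0.15 * np.arange(64))[None, :].repeat(64, axis=0)
modulus, phase = gabor_response(img, frequency=0.15, theta=0.0)
print(modulus.shape, phase.min(), phase.max())   # phase lies in [-pi, pi]
```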

Design and Implementation of Medical Information System using QR Code (QR 코드를 이용한 의료정보 시스템 설계 및 구현)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Joo, Su-Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.109-115
    • /
    • 2015
  • New medical device technologies for bio-signal and medical information, developed in various forms, have been increasing. Information gathering techniques and the growing number of bio-signal devices are being used as the main source of information for medical services in everyday life. Hence, the utilization of various bio-signals is increasing, but it suffers from the problem that security is not taken into account. Furthermore, the medical image information and bio-signals of a patient in the medical field are generated by individual devices, so they cannot be managed and integrated. In order to solve that problem, in this paper we integrate a QR code signal associated with the medical image information, including the doctor's findings, and the bio-signal information. The system implementation environment for medical imaging devices and bio-signal acquisition was configured with bio-signal measurement devices, smart devices, and a PC. For ROI extraction from bio-signals and the reception of image information transferred from medical equipment or bio-signal measurement devices, the .NET Framework was used to operate a QR server module on the Windows Server 2008 operating system. The main function of the QR server module is to parse the DICOM files generated by the medical imaging devices and extract the identified ROI information to store and manage in the database. Additionally, patient health information such as EMR and OCS records, together with the extracted ROI information needed as basic information in emergency situations, is managed by QR code. QR codes, ROI management, and the bio-signal information files are also stored and managed, depending on the size of the received bio-signal information, with a PID (patient identification) to be used by the bio-signal device. If the received information exceeds the maximum size that can be converted into a QR code, the QR code instead carries URL information through which the bio-signal information can be accessed on the server. Likewise, the .NET Framework is installed to provide the information in the form of a QR code, so the client can check and find the relevant information through a PC or an Android-based smart device. Finally, the existing medical imaging information, bio-signal information, and patient health information are integrated by the application service in order to provide a medical information service suitable for the medical field.
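
The abstract describes a size-based decision: small ROI/bio-signal payloads are embedded directly in the QR code, while larger ones are stored server-side and only a lookup URL is encoded. A minimal sketch of that decision is shown below, using the third-party `qrcode` package; the byte threshold, field names, and server URL are illustrative assumptions, not values from the paper.

```python
import json
import qrcode  # third-party package: pip install qrcode[pil]

MAX_QR_PAYLOAD = 1000  # illustrative limit; real QR capacity depends on version/ECC level

def make_medical_qr(pid, roi_info, server_base="https://example-qr-server/biosignal"):
    """Encode small payloads directly; for large ones, encode a lookup URL instead.
    `pid` (patient identification) and `roi_info` mirror fields named in the abstract;
    the server URL is a placeholder."""
    payload = json.dumps({"pid": pid, "roi": roi_info})
    if len(payload.encode("utf-8")) <= MAX_QR_PAYLOAD:
        data = payload                      # small enough: embed the data itself
    else:
        data = f"{server_base}/{pid}"       # too large: embed a URL to fetch it
    return qrcode.make(data)                # returns a PIL image of the QR code

img = make_medical_qr("P-0001", {"lead": "II", "interval_ms": [120, 480]})
img.save("patient_qr.png")
```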

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.49-55
    • /
    • 2015
  • According to the statistics of traffic accidents over the recent five years, more traffic accidents happened during the night time than during the day time. There are various causes of traffic accidents, and one of the major causes is inappropriate or missing street lights, which confuse the driver's sight and lead to accidents. In this paper, using smartphones, we designed and implemented a lane luminance measurement application which stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inappropriate street light facilities and areas that do not have any street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves the operation speed over code written in Java or other languages. To measure the luminance of the road, the input image in the RGB color space is converted to the YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lane and records the lane luminance in the database server. It receives the road video image from the smartphone's camera and reduces the computational cost by allocating an ROI (region of interest) in the input images. The ROI of the image is converted to a grayscale image, and the Canny edge detector is applied to extract the outline of the lanes. After that, the Hough line transform is applied to obtain the candidate lane group. The two sides of the lane are selected by a lane detection algorithm that uses the gradients of the candidate lanes. When both lanes of the road are detected, we set up a triangular area extending 20 pixels down from the intersection of the lanes, and the luminance of the road is estimated from this triangle. The Y value is calculated from the R, G, and B values of each pixel in the triangle. The average Y value of the pixels is scaled to a range from 0 to 100 to express the luminance of the road, and each pixel value is represented by a color between black and green. We store the car location, obtained from the smartphone's GPS sensor, in the database server after analyzing the road lane video image together with the luminance of the road about 60 meters ahead, over a wireless connection every 10 minutes. We expect that the collected road luminance information can warn drivers about safe driving or effectively improve renovation plans for road luminance management.
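
As a rough sketch of the pipeline the abstract describes (luminance from the Y channel of YCbCr, Canny edges and a Hough line transform on a reduced ROI), the following OpenCV/Python fragment shows the main steps. The thresholds, the ROI choice, and the central patch standing in for the 20-pixel triangle are assumptions, and the paper's app actually runs in C/C++ via the Android NDK rather than Python.

```python
import cv2
import numpy as np

def road_luminance(frame_bgr):
    """Rough sketch of the steps in the abstract:
    1) restrict work to a lower-half ROI, 2) Canny + Hough to find lane candidates,
    3) read luminance as the Y channel of YCrCb inside a small patch near the
    lane area, scaled to 0..100. Thresholds are illustrative."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                         # lower half of the frame
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=20)
    # Luminance: Y channel of the YCrCb color space.
    y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    # Placeholder for the triangle below the lane intersection: here we simply
    # average a small central patch 20 rows tall.
    cy, cx = y_channel.shape[0] // 2, y_channel.shape[1] // 2
    patch = y_channel[cy:cy + 20, cx - 20:cx + 20]
    luminance_0_100 = float(patch.mean()) / 255.0 * 100.0
    return lines, luminance_0_100

# Example usage with a dummy dark frame:
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
_, lum = road_luminance(dummy)
print("estimated road luminance (0-100):", lum)
```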

Freedom of Election through SNS and the Crime of Publishing False Facts (SNS를 통한 선거의 자유와 허위사실공표죄)

  • Lee, Ju-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.2
    • /
    • pp.149-156
    • /
    • 2013
  • Although the Constitutional Court's ruling has effectively guaranteed the freedom to campaign through SNS, a large portion of the campaign restrictions in the public election law still remain, and this is likely to cause many legal problems in future elections. Moreover, as campaigning through SNS spreads with nearly unlimited reach, a number of problems are likely to arise concerning the freedom to campaign through SNS and the election law's restriction on the publication of false facts. The crime of publishing false facts to disparage a pre-candidate has been interpreted by the courts strictly, as punishing the dissemination of false facts with the specific purpose of causing a candidate's defeat, rather than being read broadly. This paper therefore discusses this strict interpretation and the need to revise or delete the provision. The issues are analyzed as follows. First, the purpose of the Act calls for a limited interpretation of this offense. Second, since this offense is a purpose crime, committed for a specific purpose, the purpose element must also be interpreted strictly. In the case of retweets by followers on the Internet, dissemination can occur in many cases without any such purpose, so there is an obvious limit to punishing it under this offense. Finally, since the Constitutional Court has guaranteed the freedom to campaign through SNS and free elections, a limited interpretation that opens up space for communication is needed. Guaranteeing freedom of expression in this way is imperative for a mature civil society.

Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted with various commercial packers to prevent reverse engineering, so malicious code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable file has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses until the OEP appears, and try to find the OEP among those addresses. However, instead of finding exactly one OEP, unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates altogether. In other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that yields smaller OEP candidate sets by adding two methods based on properties of the OEP. In this paper, we propose two methods that reduce the OEP candidate set by exploiting the fact that the function call sequence and the parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the unpacking by matching the patterns of system functions called in the packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed with 16 commercial packers. On average we reduce the OEP candidates by more than 40% compared to PinDemonium, excluding 2 commercial packers that could not be executed due to anti-debugging techniques.
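
The first proposed method matches the sequence of compiler-inserted system functions called after the true entry point. The toy Python sketch below illustrates that idea only: it filters OEP candidates by checking whether the calls recorded after each candidate match a reference prologue. The function names, trace format, and window size are hypothetical; this is not PinDemonium code.

```python
def filter_oep_candidates(trace, candidates, reference_prologue, window=10):
    """Keep only OEP candidates whose following instructions call the same
    compiler runtime functions (in order) as a reference, unpacked program.

    trace:              list of (address, called_function_or_None) from a DBI run
    candidates:         addresses an unpacker reported as possible OEPs
    reference_prologue: ordered list of system/CRT functions expected right
                        after the real entry point (e.g. from an unpacked build)
    """
    index = {addr: i for i, (addr, _) in enumerate(trace)}
    survivors = []
    for cand in candidates:
        if cand not in index:
            continue
        # Collect the function calls in the next `window` trace entries.
        calls_after = [fn for _, fn in trace[index[cand]: index[cand] + window] if fn]
        # The candidate survives if the reference prologue appears as a prefix.
        if calls_after[:len(reference_prologue)] == reference_prologue:
            survivors.append(cand)
    return survivors

# Toy example: only the second candidate (0x402000) is followed by the expected CRT calls.
trace = [(0x401000, None), (0x401004, "VirtualAlloc"), (0x402000, None),
         (0x402004, "GetSystemTimeAsFileTime"), (0x402010, "GetCurrentProcessId"),
         (0x402018, None)]
print(filter_oep_candidates(trace, [0x401000, 0x402000],
                            ["GetSystemTimeAsFileTime", "GetCurrentProcessId"]))
```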

A Page Replacement Scheme Based on Recency and Frequency (최근성과 참조 횟수에 기반한 페이지 교체 기법)

  • Lee, Seung-Hoon;Lee, Jong-Woo;Cho, Seong-Je
    • The KIPS Transactions:PartA
    • /
    • v.8A no.4
    • /
    • pp.469-478
    • /
    • 2001
  • In a virtual memory system, the page replacement policy exerts a great influence on the performance of demand paging. LRU (Least Recently Used) and LFU (Least Frequently Used) are the typical replacement policies. The LRU policy performs effectively in many cases and adapts well to changing workloads compared to other policies; however, it cannot distinguish well between frequently and infrequently referenced pages. The LFU policy replaces the page with the smallest reference count. Although it considers all references in the past, it cannot discriminate between references that occurred far back in the past and more recent ones, and thus it cannot adapt well to changing workloads. In this paper, we first analyze the memory reference patterns of eight applications. The patterns show that, depending on the application, either the recently referenced pages or the frequently referenced pages are accessed continuously, so it is rather hard to optimize page replacement with just one of the LRU and LFU policies. This paper attempts to combine the advantages of the two policies and proposes a new page replacement policy. In the proposed policy, the paging list is divided into two lists (an LRU list and an LFU list). By keeping the two lists in recency order and reference frequency order respectively, we restrain pages that were highly referenced in the past from being replaced by the LRU policy. Results from trace-driven simulations show that there exist points on the spectrum at which the proposed policy performs better than the previously known policies for the workloads we considered. In particular, our policy outperforms the existing ones for applications with reference patterns that re-access, after some time, pages that were frequently referenced in the past.
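
As an illustration of combining recency and frequency, here is a compact Python sketch of a page cache that keeps pages in LRU order while tracking reference counts and protecting a historically hot page from immediate eviction. The protection rule and ratio are illustrative simplifications, not the authors' exact two-list algorithm.

```python
from collections import OrderedDict

class RecencyFrequencyCache:
    """Toy page cache keeping recency (LRU order) and frequency (reference counts)
    side by side. Eviction prefers the least-recently-used page, but a page whose
    reference count is well above the runner-up's is protected and the next LRU
    candidate is evicted instead. Illustrative only."""

    def __init__(self, capacity, protect_ratio=2):
        self.capacity = capacity
        self.protect_ratio = protect_ratio
        self.pages = OrderedDict()   # page -> reference count, kept in recency order

    def access(self, page):
        if page in self.pages:
            self.pages[page] += 1
            self.pages.move_to_end(page)      # most recently used goes to the end
            return True                        # hit
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page] = 1
        return False                           # miss

    def _evict(self):
        lru_order = list(self.pages)           # front = least recently used
        victim = lru_order[0]
        # Protect a historically hot page: skip it if its count dwarfs the runner-up's.
        if len(lru_order) > 1 and self.pages[victim] >= self.protect_ratio * self.pages[lru_order[1]]:
            victim = lru_order[1]
        del self.pages[victim]

cache = RecencyFrequencyCache(capacity=3)
hits = sum(cache.access(p) for p in [1, 2, 3, 1, 1, 4, 1, 5, 2])
print("hits:", hits)
```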


The Effects of Virtual Reality Advertisement on Consumer's Intention to Purchase: Focused on Rational and Emotional Responses (가상현실(Virtual Reality) 광고가 소비자 구매의도에 미치는 영향: 이성적인 반응과 감성적인 반응의 통합)

  • Cha, Jae-Yol;Im, Kun-Shin
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.101-124
    • /
    • 2009
  • According to Wikipedia, virtual reality (VR) is defined as a technology that allows a user to interact with a computer-simulated environment. Due to the rapid growth of information technology (IT), the cost of virtual reality has been decreasing while the utility of virtual reality advertisements has dramatically increased. Nevertheless, only a few studies have investigated the effects of virtual reality advertisement on consumer behavior. The objective of this study is therefore to empirically examine these effects. Compared to traditional online advertisements, virtual reality advertisement enables consumers to experience products realistically over the Internet by providing high media richness, interactivity, and telepresence (Suh and Lee, 2005). Advertisements with high media richness facilitate consumers' understanding of advertised products by providing them with a large amount and a wide variety of information about the products. Interactivity also provides consumers with a high level of control over the computer-simulated environment in terms of their ability to adjust the information according to their individual interests and concerns and to be active rather than passive in their engagement with the information (Pimentel and Teixera, 1994). Through high media richness and interactivity, virtual reality advertisements can generate compelling feelings of "telepresence" (Suh and Lee, 2005). Telepresence is a sense of being present in an environment by means of a communication medium (Steuer, 1992). Virtual reality advertisements enable consumers to create a perceptual illusion of being present and highly engaged in a simulated environment while they are physically present in another place (Biocca, 1997). Based on the characteristics of virtual reality advertisements, a research model has been proposed to explain consumer responses to them. The proposed model includes two dimensions of consumer response. One dimension is the consumers' rational response, based on Information Processing Theory, from which product knowledge and perceived risk are selected as antecedents of intention to purchase. The other dimension is the consumers' emotional response, based on Attitude-Structure Theory, from which arousal, flow, and positive affect are selected as antecedents of intention to purchase. Because prior studies have been criticized for investigating only one of the two dimensions of consumer response, our research model incorporates both. Based on Attitude-Structure Theory, we hypothesized the path of consumers' emotional responses to a virtual reality advertisement: (H1) arousal by the virtual reality advertisement increases flow; (H2) flow increases positive affect; and (H3) positive affect increases intention to purchase. In addition, we hypothesized the path of consumers' rational responses based on Information Processing Theory: (H4) increased product knowledge through the virtual reality advertisement decreases perceived risk; and (H5) perceived risk decreases intention to purchase. Based on the literature on flow, we additionally hypothesized the relationship between flow and product knowledge: (H6) flow increases product knowledge. To test the hypotheses, we conducted a free simulation experiment (Fromkin and Streufert, 1976) with 300 people.
Subjects were asked to use the virtual reality advertisement of a cellular phone on the Internet and then answer questions about the variables. To check whether subjects had fully experienced the virtual reality advertisement, they were asked to answer a quiz about the advertisement itself. The responses of 26 subjects were dropped because of incomplete answers, and the responses of the remaining 274 subjects were used to test the hypotheses. All six hypotheses were supported. In addition, we found that consumers' emotional response has a stronger impact on their intention to purchase than their rational response does. This study sheds light on practical implications for both IS researchers and managers. First of all, while most previous research has analyzed only one of the customers' rational and emotional responses, we theoretically incorporated and empirically examined both. Second, we empirically showed that mediators such as arousal, flow, positive affect, product knowledge, and perceived risk play an important role between virtual reality advertisement and customers' intention to purchase. The findings also provide a basis for practical strategies. Since consumers' emotional response was found to be stronger than their rational response, advertisements using virtual reality should focus on the emotional side, and virtual reality can serve as an appropriate advertising tool for fancy products whose online advertisements need to appeal to customers' emotions. Finally, although this study examined the effects of a virtual reality advertisement for a cellular phone, its findings could be applied to other products that are suited to virtual experience. This research has some limitations, however. We were unable to control for different kinds of consumers and different product attributes in their effect on consumers' intention to purchase; future research should control for consumer and product types to obtain more reliable results. In addition to the consumer and product attributes, other variables could affect consumers' intention to purchase, so future research also needs to find ways to control for them.

Permanent Preservation and Use of Historical Archives: Preservation Strategy and Digitization (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies
    • /
    • no.1
    • /
    • pp.23-76
    • /
    • 2000
  • In this paper, I examine what has been researched and decided about preservation strategy and the selection of preservation media in the Western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worth preserving permanently. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has been shortened by the acidification and brittleness of modern papers. The emergence of information technology also affects the traditional ways of preserving and using archival materials. User expectations are becoming so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than practitioners of the traditional archival handicraft. Preservation strategy plays an important role in archival management as well as in information management. For cost-effective management of archives and archival institutions, a preservation strategy is a must. The preservation strategy encompasses all aspects of the archival preservation process and its practices, from selection of archives, appraisal, inventorying, arrangement, description, conservation, microfilming or digitization, and archival buildings, to access services. These archival functions should be considered in relation to each other to ensure the proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing one for the other. Preservation strategy planning is essential to determine the policies by which archives keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials. To this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option for a permanent preservation medium. ISO and ANSI have developed standards for the quality of microfilms and microfilming technology. Preservation microfilming guidelines have also been developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming, since limited resources always put a restraint on preservation management; appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use over the Internet. As information technology develops, archivists have begun to use it to make preservation easier and more economical, and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records into electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disks, have not been proven reliable for permanent preservation.
Due to their chemical coating and their physical reliance on light, they are not stable and can be preserved for at best 100 years in an optimum storage environment; most CD-Rs last only 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, the cost of refreshing or upgrading digital images is very high, and the process has to be repeated at least every five to ten years. No standard for this obsolescence of hardware and software has yet come into being. In short, digital permanence is not a fact but an uncertain possibility. Archivists must weigh in their preservation planning both the risk of introducing new technology and its promising possibilities. In planning the digitization of historical materials, archivists should plan for maintaining the digitized images and reformatting them for coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become impossible, which means a loss of information and a final failure of both the 'preservation' and the 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, focusing mainly on speeding up to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To provide those services, on top of high data speed, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices per km². In particular, in intelligent traffic control systems and services using various vehicle-based V2X (Vehicle to X) communications, such as traffic control, the reduction of delay and the reliability of real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their indoor use. Under existing networks it is therefore difficult to overcome these constraints. The underlying centralized SDN also has limited capability for offering delay-sensitive services, because communication with many nodes overloads its processing. Basically, SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with a conventional centralized structure have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs thus need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in which delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request the relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assume 5G small cells with radii of 50 to 250 m, and vehicle speeds of 30 to 200 km/h were considered in order to examine the network architecture that minimizes the delay.
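
As a small numeric sketch of the delay comparison described above, the following Python fragment computes an end-to-end delay budget (half an information-change cycle of waiting, plus RTD, plus SDN processing time) and compares it with the time a vehicle spends inside a small cell for the radius and speed ranges mentioned in the abstract. The cycle and processing values are illustrative assumptions, not the paper's measured results.

```python
def cell_dwell_time_ms(cell_radius_m, speed_kmh):
    """Time the vehicle spends crossing a cell diameter at a constant speed."""
    speed_ms = speed_kmh / 3.6                  # km/h -> m/s
    return (2 * cell_radius_m / speed_ms) * 1000.0

def total_delay_ms(info_cycle_ms, rtd_ms, sdn_proc_ms):
    """Average end-to-end delay: half an information-change cycle of waiting,
    plus round-trip delay, plus SDN data-processing time."""
    return info_cycle_ms / 2.0 + rtd_ms + sdn_proc_ms

# Illustrative sweep over the ranges mentioned in the abstract
# (cell radius 50-250 m, vehicle speed 30-200 km/h); RTD kept around 1 ms.
for radius in (50, 150, 250):
    for speed in (30, 100, 200):
        dwell = cell_dwell_time_ms(radius, speed)
        delay = total_delay_ms(info_cycle_ms=10.0, rtd_ms=1.0, sdn_proc_ms=5.0)
        ok = "OK" if delay < dwell else "TOO SLOW"
        print(f"radius={radius:>3} m speed={speed:>3} km/h "
              f"dwell={dwell:8.1f} ms delay={delay:5.1f} ms -> {ok}")
```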