• Title/Summary/Keyword: 동시사용자 (concurrent users)

Search Results: 2,178, Processing Time: 0.025 seconds

A development of DS/CDMA MODEM architecture and its implementation (DS/CDMA 모뎀 구조와 ASIC Chip Set 개발)

  • 김제우;박종현;김석중;심복태;이홍직
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.6
    • /
    • pp.1210-1230
    • /
    • 1997
  • In this paper, we suggest an architecture for a DS/CDMA transceiver composed of one pilot channel, used as a reference, and multiple traffic channels. The pilot channel, an unmodulated PN code, is used as the reference signal for PN code synchronization and data demodulation. The coherent demodulation architecture is exploited for the reverse link as well as for the forward link. The characteristics of the suggested DS/CDMA system are as follows. First, we suggest an interlaced quadrature spreading (IQS) method. In this method, the PN code for the I-phase of the 1st channel is used for the Q-phase of the 2nd channel, and the PN code for the Q-phase of the 1st channel is used for the I-phase of the 2nd channel, and so on, which is quite different from the existing spreading schemes of DS/CDMA systems such as IS-95 digital CDMA cellular or W-CDMA for PCS. By IQS spreading, we can drastically reduce the zero-crossing rate of the RF signals. Second, we introduce an adaptive threshold setting for PN code synchronization and an initial acquisition method that uses a single PN code generator and reduces the acquisition time by half compared with existing methods, and we exploit state machines to reduce the reacquisition time. Third, various functions, such as automatic frequency control (AFC), automatic level control (ALC), a bit-error-rate (BER) estimator, and spectral shaping for reducing adjacent-channel interference, are introduced to improve system performance. Fourth, we designed and implemented the DS/CDMA MODEM for variable-transmission-rate applications, from 16 Kbps to 1.024 Mbps. We developed and confirmed the DS/CDMA MODEM architecture through mathematical analysis and various kinds of simulations. The ASIC design was done using VHDL coding and synthesis. To cope with several different kinds of applications, we developed the transmitter and receiver ASICs separately. While a single transmitter or receiver ASIC contains three channels (one for the pilot and the others for the traffic channels), by combining several transmitter ASICs we can expand the number of channels up to 64. The ASICs are now in use for implementing line-of-sight (LOS) radio equipment.

  • PDF
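The IQS idea above — the 2nd channel spreading its I-phase with the 1st channel's Q-code and vice versa — can be sketched as a toy baseband model. This is an illustrative numerical sketch with made-up random PN sequences and chip-rate data, not the actual ASIC logic or real PN generators:

```python
import numpy as np

rng = np.random.default_rng(0)

def pn(n):
    # toy PN sequence: random +/-1 chips (a real system uses LFSR-generated codes)
    return rng.choice([-1, 1], size=n)

n_chips = 1024
pn_i, pn_q = pn(n_chips), pn(n_chips)   # I/Q PN codes assigned to channel 1
d1 = rng.choice([-1, 1], size=n_chips)  # channel-1 data, already at chip rate
d2 = rng.choice([-1, 1], size=n_chips)  # channel-2 data

# Conventional quadrature spreading: both channels use the same I/Q code mapping
conv = (d1 * pn_i + 1j * d1 * pn_q) + (d2 * pn_i + 1j * d2 * pn_q)

# IQS: channel 2 swaps the codes, putting channel 1's I-code on its Q-phase
# and channel 1's Q-code on its I-phase
iqs = (d1 * pn_i + 1j * d1 * pn_q) + (d2 * pn_q + 1j * d2 * pn_i)

def zero_crossings(x):
    # count sign changes of the real part, a crude proxy for envelope transitions
    s = np.sign(x.real)
    return int(np.sum(s[1:] != s[:-1]))
```

The swap decorrelates the composite I and Q rails, which is the mechanism the abstract credits for the reduced zero-crossing rate of the RF signal.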

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes, curvature, and other potential causes of gas explosions. There are two major data access patterns apparent when an analyst accesses the pipeline signal data. The first is a sequential pattern, where an analyst reads the sensor data only once, in a sequential fashion. The second is a repetitive pattern, where an analyst repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, but not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by considering pipeline sensor data as multiple time-series data and by efficiently caching the time-series data in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose a new concept, the signal cache line, as the caching unit: a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
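The signal-cache-line idea can be sketched as a small client-side cache: fixed-distance blocks of samples are kept in memory, so repetitive reads over the same range avoid repeated server round-trips. All names and the LRU policy below are illustrative, not the actual T-Cache API:

```python
# Minimal sketch of a client-side time-series cache keyed by "signal cache
# lines" (fixed-distance blocks of sensor samples).
class TCache:
    def __init__(self, fetch, line_size=100, capacity=64):
        self.fetch = fetch          # server round-trip: (start, end) -> list
        self.line_size = line_size  # samples per cache line (fixed distance)
        self.capacity = capacity    # max number of resident cache lines
        self.lines = {}             # line index -> samples, insertion-ordered

    def _line(self, idx):
        if idx in self.lines:
            self.lines[idx] = self.lines.pop(idx)   # re-insert: mark as MRU
            return self.lines[idx]
        start = idx * self.line_size
        data = self.fetch(start, start + self.line_size)
        if len(self.lines) >= self.capacity:
            self.lines.pop(next(iter(self.lines)))  # evict the LRU line
        self.lines[idx] = data
        return data

    def read(self, start, end):
        # assemble [start, end) from whole cache lines, fetching only misses
        out = []
        for idx in range(start // self.line_size, (end - 1) // self.line_size + 1):
            line = self._line(idx)
            lo = max(start, idx * self.line_size) - idx * self.line_size
            hi = min(end, (idx + 1) * self.line_size) - idx * self.line_size
            out.extend(line[lo:hi])
        return out
```

A repeated `read` over the same range is then served entirely from memory, which is the repetitive-pattern win the abstract reports.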

The Effect of Mean Brightness and Contrast of Digital Image on Detection of Watermark Noise (워터 마크 잡음 탐지에 미치는 디지털 영상의 밝기와 대비의 효과)

  • Kham Keetaek;Moon Ho-Seok;Yoo Hun-Woo;Chung Chan-Sup
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.305-322
    • /
    • 2005
  • Watermarking is a widely employed method of protecting the copyright of a digital image, in which the owner's unique image is embedded into the original image. A strengthened level of watermark insertion helps enhance its resilience during extraction, even under various distortions such as transformations of image size or resolution. At the same time, the level should be moderated enough not to reach human visibility. Finding a balance between these two is crucial in watermarking. In watermarking algorithms, a predefined watermark strength, computed from the physical difference between the original and embedded images, is applied to all images uniformly. However, the mean brightness or contrast of the surroundings, rather than the absolute brightness of an object, can affect human sensitivity for object detection. In the present study, we examined whether the detectability of watermark noise might be altered by image statistics: the mean brightness and contrast of the image. As the first step in examining their effect, we made nine fundamental images with varied brightness and contrast from the original image. For each fundamental image, detectability of watermark noise was measured. The results showed that the watermark noise strength required for detection increased as the brightness and contrast of the fundamental image increased. We fitted the data to a regression line which can be used to estimate the watermark strength for a given image with a certain brightness and contrast. Although other required factors must be taken into consideration before directly applying this formula to an actual watermarking algorithm, an adaptive watermarking algorithm could be built on this formula with image statistics such as brightness and contrast.

  • PDF
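The regression-line step can be reproduced with ordinary least squares: fit the measured detection threshold against mean brightness and contrast, then use the fitted line to pick an adaptive watermark strength for a new image. The numbers below are illustrative stand-ins, not the paper's measured data:

```python
import numpy as np

# Hypothetical measurements: (mean brightness, contrast) of each fundamental
# image and the watermark-noise strength at which it was detected (a.u.).
X = np.array([[ 60, 0.2], [ 60, 0.5], [120, 0.2],
              [120, 0.5], [180, 0.2], [180, 0.5]], dtype=float)
y = np.array([1.0, 1.4, 1.8, 2.3, 2.6, 3.1])

# Fit threshold ~ a*brightness + b*contrast + c by ordinary least squares,
# mirroring the regression-line idea in the abstract.
A = np.column_stack([X, np.ones(len(X))])
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_strength(brightness, contrast):
    # adaptive watermark strength suggested for a new image's statistics
    return a * brightness + b * contrast + c
```

With monotone data like this, both fitted coefficients come out positive, matching the finding that brighter, higher-contrast images tolerate a stronger watermark before it becomes visible.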

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems. But the SSL protocol has several problems because of its rigidity in operation. First, the SSL protocol places a considerable burden on CPU utilization, lowering the performance of the security service in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, the SSL protocol can be vulnerable to cryptanalysis because a fixed algorithm and key are used. Third, it is difficult to add and use new cryptography algorithms. Finally, it is difficult for developers to learn to use the cryptography APIs (Application Program Interfaces) for the SSL protocol. Hence, we need to address these problems and, at the same time, we need a secure and convenient method to operate the SSL protocol and to handle data efficiently. In this paper, we propose an SSL component designed and implemented using the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Further, the SSL component can improve productivity and reduce development cost, because it can be reused. Also, when new algorithms are added or existing algorithms are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service in the application layer. First of all, we derive the requirements; then we design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented with EJB, and they provide efficient data handling by encrypting/decrypting only selected data. This also improves usability, since data and mechanisms can be chosen as the user intends. In conclusion, as we test and evaluate these components, the SSL component proves more usable and efficient than the existing SSL protocol, because the increase in processing time for the SSL component is lower than the SSL protocol's.
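The component structure — separate confidentiality and integrity services behind one facade, encrypting only the fields the caller selects — can be sketched as below. This is a loose illustrative sketch, not the paper's EJB design: class names are invented, and a toy XOR stream stands in for whatever pluggable cipher the component would be configured with (do not use it for real security):

```python
import hmac
import hashlib

class Confidentiality:
    """Pluggable cipher slot; the XOR stream here is a placeholder algorithm."""
    def __init__(self, key: bytes):
        self.key = key
    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))
    decrypt = encrypt  # an XOR stream is its own inverse

class Integrity:
    """Integrity service: HMAC-SHA256 tag over the protected record."""
    def __init__(self, key: bytes):
        self.key = key
    def tag(self, data: bytes) -> bytes:
        return hmac.new(self.key, data, hashlib.sha256).digest()
    def verify(self, data: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.tag(data), tag)

class SSLComponent:
    """Facade: encrypt only the selected fields, then tag the whole record."""
    def __init__(self, key: bytes):
        self.conf, self.integ = Confidentiality(key), Integrity(key)
    def protect(self, record: dict, sensitive: set) -> dict:
        out = {k: self.conf.encrypt(v) if k in sensitive else v
               for k, v in record.items()}
        out["_mac"] = self.integ.tag(b"".join(sorted(out.values())))
        return out
```

Encrypting only the selected fields is what gives the "efficient data handling" claim its force: non-sensitive data skips the cipher entirely, unlike full-stream SSL encryption.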

Dynamic Decision Making using Social Context based on Ontology (상황 온톨로지를 이용한 동적 의사결정시스템)

  • Kim, Hyun-Woo;Sohn, M.-Ye;Lee, Hyun-Jung
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.43-61
    • /
    • 2011
  • In this research, we propose dynamic decision making using social context based on ontology. Dynamic adaptation is adopted for high-quality decision making, defined as the creation of proper information using contexts that depend on the decision maker's state of affairs in a ubiquitous computing environment. The context for dynamic adaptation is classified as static, dynamic, or social context. Static context contains explicit personal information such as demographic data. Dynamic context, such as weather or traffic information, is provided by external information service providers. Finally, social context implies much more implicit knowledge, such as social relationships, than the other two context types, but it is not easy to extract implied tacit knowledge or generalized rules from such information, so it has not been easy to apply social context to dynamic adaptation. In this light, we applied social context to dynamic adaptation to generate context-appropriate personalized information. A modeling methodology is necessary to support dynamic adaptation using this context. The proposed context modeling uses ontology and cases, which are best suited to representing tacit and unstructured knowledge such as social context. Case-based reasoning and constraint satisfaction are applied in the dynamic decision-making system for dynamic adaptation. Case-based reasoning uses cases to represent the context, including social, dynamic, and static context, and to extract personalized knowledge from the personalized case base. Constraint satisfaction is used when the case selected through case-based reasoning needs dynamic adaptation, which is usual because the context can change over time with the status of the environment. The case-based reasoning adopts a problem context for effective representation of static, dynamic, and social context, using a case structure with index and solution and the problem ontology of the decision maker. Cases are stored in the case base as a repository of the decision maker's personal experience and knowledge. The constraint satisfaction problem uses a solution ontology extracted from collective intelligence, generalized from the solutions of decision makers. The solution ontology is retrieved to find a proper solution for the decision maker's context when necessary; at the same time, dynamic adaptation is applied to adapt the selected case using the solution ontology. The decision-making process comprises the following steps. First, whenever the system becomes aware of a new context, it converts the context into problem-context ontology with a case structure. Any context is defined by a case with a formal knowledge representation structure; thereby, social context, as implicit knowledge, is also represented in a formal form like a case. In addition, ontology is adopted for the context modeling. Second, we select a proper case as a decision-making solution from the decision maker's personal case base. The selected case should be the best case given the context of the decision maker's current status as well as the decision maker's requirements. However, the environment and context around the decision maker can change, making it necessary to adapt the selected case. Third, if the selected case is not available, or the decision maker is not satisfied given the newly arrived context, then the constraint satisfaction problem and solution ontology are applied to derive a new solution for the decision maker. The constraint satisfaction problem uses the previously selected case and the solution ontology for this adaptation. The proposed methodology is verified by searching for a meeting place according to the decision maker's requirements and context; the extracted solution shows satisfaction depending on the meeting purpose.
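The retrieve-then-adapt loop described in the three steps can be sketched compactly: retrieve the most similar case from the personal case base, keep its solution if it satisfies the current constraints, and otherwise fall back to a constraint-satisfaction pass over the solution ontology. Everything below is an illustrative toy, with the CSP reduced to brute-force filtering and all names invented:

```python
def similarity(ctx_a, ctx_b):
    # fraction of matching context attributes (static, dynamic, social alike)
    keys = set(ctx_a) | set(ctx_b)
    return sum(ctx_a.get(k) == ctx_b.get(k) for k in keys) / len(keys)

def retrieve(case_base, context):
    # case-based reasoning: pick the case whose problem context is most similar
    return max(case_base, key=lambda c: similarity(c["problem"], context))

def adapt(solution_ontology, constraints):
    # CSP as brute-force filtering of candidate solutions from the ontology
    for cand in solution_ontology:
        if all(rule(cand) for rule in constraints):
            return cand
    return None

def decide(case_base, solution_ontology, context, constraints):
    case = retrieve(case_base, context)
    if all(rule(case["solution"]) for rule in constraints):
        return case["solution"]               # retrieved case still satisfies context
    return adapt(solution_ontology, constraints)  # dynamic adaptation step
```

In the paper's meeting-place scenario, a newly arrived constraint (say, the meeting now requires a quiet venue) invalidates the retrieved case and triggers the ontology-backed adaptation.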

Deconstructive reading of Makoto Shinkai's <Your Name.>: Stories of things that cannot meet without their names (해체로 읽는 신카이 마코토의 <너의 이름은. 君の名は.> : 이름 없이는 서로 만날 수 없는 사물들에 대해)

  • Ahn, Yoon-kyung;Kim, Hyun-suk
    • Cartoon and Animation Studies
    • /
    • s.50
    • /
    • pp.75-99
    • /
    • 2018
  • Makoto Shinkai, a Japanese animated film maker, has been known for his one-person production system and as a 'writer of light', but his 2016 release "Your Name." departed from the elements that characterize his existing works. At the same time, by combining them with the traditional musubi (むすび) story, it became a big hit thanks to its rich narrative and the attraction of open interpretive possibilities. As can be guessed from the title, this work shows the encounter between the ancient Japanese language and the modern language in relation to the 'name', and presents images in which the role of the name (language) is repeatedly emphasized, with various variations in events, for the perfect 'encounter'. In this work, the interpretations of the signifié of characters and objects are extended and reserved in a metaphorical role of similarity, depending on the meaning of the subject they touch. The relationship between words and objects analyzed through the structure of signifiant and signifié is an epoch-making ideological discovery of modern times, revealed through F. de Saussure. Focusing on 'the difference' between being this and being that in Saussure's notion, Derrida dismissed logocentrism, the rationalism that fully obeyed the order of Logos. Likewise, dismissing the center, or dismissing the owner, emerged after the exclusive and closed principle of Western metaphysics was dismissed. Derrida's 'deconstruction' is a philosophical strategy that starts with insight into the nature of language. 'Dissemination', a metaphor he used as a methodological concept for reading texts, acts as interpretation and practice (or play), but does not pursue an ultimate interpretation. His 'undecidability' does not start with infinity, but ends with infinity. The researcher testifies that we cannot be interpreters of the world because we, as humans, are not the subjects of language but its users. Derrida also interpreted the world of things composed of signifiant and signifié as open texts. In this respect, this study aimed to read Shinkai's work, which tells of the meeting of things guided by names, through Derrida's frames of 'deconstruction' and 'dissemination'. This study intends to reconsider what relationship the signifiant and signifié have with human beings living in modern times, to examine the relationship between words and objects presented in this work through Jacques Derrida's concepts of deconstruction and dissemination, and to recognize that we are merely a part of signifiant and signifié. Just as Taki and Mitsuha confirm each other's existence by asking each other's names, we are in the world of things, expecting the musubi by which a world of names calls me.

Manufacture of Daily Check Device and Efficiency Evaluation for Daily Q.A (일일 정도관리를 위한 Daily Check Device의 제작 및 효율성 평가)

  • Kim Chan-Yong;Jae Young-Wan;Park Heung-Deuk;Lee Jae-Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.17 no.2
    • /
    • pp.105-111
    • /
    • 2005
  • Purpose: Daily Q.A. is an important step that must precede radiation treatment. In particular, radiation output measurement and checks of the laser alignment and SSD indicator, which relate to reproducible patient set-up, must be confirmed for reliable radiation treatment. Daily Q.A. should proceed accurately and promptly, and needs an objective measurement basis. A device that facilitates confirmation of output measurement and appliance checks at one time was required. Materials and Methods: We produced a phantom-type daily check device that can confirm several appliance checks (output measurement, laser alignment, field size, and SSD indicator) with a single set-up, took measurements on linear accelerators (4 machines) for four months, and evaluated its efficiency. Results: We were able to confirm the laser alignment, field size, and SSD indicator checks at the same time, and output measurement was possible with the same set-up, so daily Q.A. time was reduced and an objective basis for each measured item was established. Over four months of measurements, output was within ±2%, and laser alignment, field size, and SSD indicator were within ±1 mm. Conclusion: Output measurement and appliance checks can be performed conveniently; time was reduced and work efficiency was raised. We achieved a cost reduction by substituting for expensive commercial equipment. Further, it is necessary to manufacture the device from strong, light materials and to improve its convenience of use.

  • PDF

A Study on the Governance of U.S. Global Positioning System (미국 글로벌위성항법시스템(GPS)의 거버넌스에 관한 연구 - 한국형위성항법시스템 거버넌스를 위한 제언 -)

  • Jung, Yung-Jin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.35 no.3
    • /
    • pp.127-150
    • /
    • 2020
  • A Basic Plan for the Promotion of Space Development (hereinafter referred to as "basic plan"), which prescribes mid- and long-term policy objectives and basic direction-setting for space development every five years, is one of the matters to be deliberated by the National Space Committee. Confirmed in February 2018 by the Committee, the 3rd Basic Plan contains a unique item compared with the 2nd Basic Plan: the construction of the "Korean Positioning System (KPS)". Almost every country in the world, including Korea, has been relying on GPS. Following the shooting down of Korean Air Flight 007 by the Soviet Union, the GPS Standard Positioning Service was opened to the world. Due to technical errors of GPS, or conflicts of interest between countries in international relations, however, this service can be interrupted at any time. Such cessation could bring extensive damage to the social, economic, and security domains of every country. This is why some countries have been constructing independent global or regional satellite navigation systems: the EU (Galileo), Russia (GLONASS), India (NavIC), Japan (QZSS), and China (BeiDou). So does South Korea. Once KPS is built, it is expected to be used in various areas such as transportation, aviation, disaster response, construction, defense, the ocean, distribution, and telecommunications. For this, a pan-governmental governance structure needs to be established, and this governance must be based on law. Korea is richly experienced in developing and operating individual satellites, but it has little experience in the simultaneous development and operation of satellite, ground, and user systems such as KPS. Therefore, we need to review overseas cases in order to minimize trial and error; U.S. GPS is the classic example.

Shape Scheme and Size Discrete Optimum Design of Plane Steel Trusses Using Improved Genetic Algorithm (개선된 유전자 알고리즘을 이용한 평면 철골트러스의 형상계획 및 단면 이산화 최적설계)

  • Kim, Soo-Won;Yuh, Baeg-Youh;Park, Choon-Wok;Kang, Moon-Myung
    • Journal of Korean Association for Spatial Structures
    • /
    • v.4 no.2 s.12
    • /
    • pp.89-97
    • /
    • 2004
  • The objective of this study is the development of a scheme and discrete optimum design algorithm based on the genetic algorithm. The algorithm can perform both scheme and size optimum design of plane trusses. The developed scheme genetic algorithm was implemented in a computer program. For the optimum design, the objective function is the weight of the structure, and the constraints are limits on loads and serviceability. The basic search method for the optimum design is the genetic algorithm, which is known to be very efficient for discrete optimization. However, its application to complicated structures has been limited by the extreme time needed for the many structural analyses. This study solves the problem by introducing size and scheme genetic-algorithm operators into the genetic algorithm. The genetic process itself takes virtually no time; the evolutionary process, however, requires a tremendous amount of time for the many structural analyses, so applying the genetic algorithm to complicated structures is extremely difficult, if not impossible. The scheme genetic-algorithm operators were introduced to overcome this problem and to complement the evolutionary process. They are very efficient in the approximate analyses and in the scheme and size optimization of plane truss structures, and they considerably reduce structural analysis time. Combining scheme and size discrete optimization in the genetic algorithm is what makes the practical discrete optimum design of plane truss structures possible. The efficiency and validity of the developed discrete optimum design algorithm were verified by applying it to various optimum design examples: plane Pratt, Howe, and Warren trusses.

  • PDF
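The discrete size-optimization core — genes indexing a table of available sections, fitness as penalized structure weight — can be sketched in a few dozen lines. This is a generic GA sketch under stated assumptions: the section table, member lengths, and feasibility check are made-up stand-ins, and the structural analysis is stubbed out rather than reproducing the paper's scheme operators:

```python
import random

random.seed(1)

# Discrete design variables: each gene selects one entry from a section table.
SECTIONS = [1.2, 1.8, 2.5, 3.4, 4.6]        # available section areas (illustrative)
N_MEMBERS = 5
LENGTHS = [3.0, 3.0, 4.2, 4.2, 3.0]         # member lengths (illustrative)

def weight(ind):
    # objective: total structure weight ~ sum of area * length
    return sum(SECTIONS[g] * L for g, L in zip(ind, LENGTHS))

def violates(ind):
    # stand-in for the structural analysis / stress and serviceability checks
    return sum(SECTIONS[g] for g in ind) < 10.0

def fitness(ind):
    # penalty method: infeasible designs get a large additive penalty
    return weight(ind) + (1e3 if violates(ind) else 0.0)

def evolve(pop_size=30, gens=50):
    pop = [[random.randrange(len(SECTIONS)) for _ in range(N_MEMBERS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                  # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_MEMBERS)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                  # mutation: re-pick a section
                child[random.randrange(N_MEMBERS)] = random.randrange(len(SECTIONS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

In the paper's setting, `violates` would call the (expensive) truss analysis; the scheme operators the abstract introduces exist precisely to cut down how often that call is needed.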

The Study of Nano-vesicle Coated Powder (나노베시클 표면처리 분체의 개발연구)

  • Son, Hong-Ha;Kwak, Taek-Jong;Kim, Kyung-Seob;Lee, Sang-Min;Lee, Cheon-Koo
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.32 no.1 s.55
    • /
    • pp.45-51
    • /
    • 2006
  • In the field of makeup cosmetics, especially powder-based foundations such as two-way cakes, pacts, and face powders, whose quality is known to be strongly influenced by the properties of the powder, surface treatment technology is widely used as a method to improve various powder characteristics such as texture, wear properties, and dispersion ability. The two-way cake, or pressed-powder foundation, is a familiar makeup product in the Asian market for deep covering and finishing purposes. In spite of recent progress in surface modification methods, such as compositing powders with different characteristics and applying a diversity of coating ingredients (metal soap, amino acid, silicone, and fluorine), this product poses a technical difficulty in enhancing both adhesion power and spreadability on the skin, in addition to potential consumer complaints about a heavy or thick feeling. This article covers the preparation and coating method of a nano-vesicle that mimics the double-layered lipid lamellar structure existing between the corneocytes of the stratum corneum of the skin, for the purpose of improving both of the two important physical characteristics of the two-way cake, spreadability and adhering force to the skin, and obtaining better affinity to the skin. The nano-vesicle was prepared using a high-pressure emulsifying process with lecithin, pseudo-ceramide, butylene glycol, and tocopheryl acetate. This nano-sized emulsion was added to a powder-dispersed aqueous phase together with a bivalent metal salt solution, and then filtering and drying procedures followed to yield the nano-vesicle coated powder. The amount of nano-vesicle coated on the powder could be regulated by the concentration of the metal salt, and this novel powder showed a lower friction coefficient, more uniform application, and higher adhesive power compared with alkyl-silane-treated powder in the spreadability and wear-property tests using a friction meter and the air-jet method. A two-way cake containing the newly developed nano-vesicle coated powder showed similar advantages in frictional and adhesive characteristics.