• Title/Summary/Keyword: 양자 정보 (quantum information)

Improvement of Personal Information Protection Laws in the era of the 4th industrial revolution (4차 산업혁명 시대의 개인정보보호법제 개선방안)

  • Choi, Kyoung-jin
    • Journal of Legislation Research
    • /
    • no.53
    • /
    • pp.177-211
    • /
    • 2017
  • In the course of the emergence and development of new ICT technologies and services such as Big Data, the Internet of Things, and Artificial Intelligence, the Fourth Industrial Revolution will reshape our future into a data-based society and economy. Because personal information lies at its center, economic development through the utilization of personal information will depend on how the personal information protection laws are framed. For Korea, which seeks to lead the Fourth Industrial Revolution, the use of personal information is a legal interest that cannot be given up, and the protection of individuals' personal interests is an equally important legal interest. The law on personal information protection must therefore be reformed rationally to harmonize the two. In this regard, this article discusses the duplication and inconsistency among the personal information protection statutes, the uncertainty of their scope of application and standards of judgment, the lack of flexibility in responding to demands for the reasonable use of personal information, and the reverse discrimination against the domestic sector compared with the regulatory blind spot enjoyed abroad. To solve these problems and improve personal information protection legislation for the era of the Fourth Industrial Revolution, the article proposes revising the purpose and regulatory direction of the personal information protection law so that both protection and safe use are considered. Balance and harmony between the systematic maintenance of the legislation and its subordinate rules are also set as important directions.
It is pointed out that rational judgment criteria, and legislative review to clarify them, are needed for the continually controversial definition of personal information and for the treatment of anonymized information as an intermediate domain. In addition to legislative review for the legitimate, non-invasive use of personal information, the article calls for improving the blanket consent system for collecting personal information so that consent is differentiated by subject, and for legislation that secures the effectiveness of regulation on the cross-border transfer of personal information. Beyond the issues discussed here there may be further challenges, but overall, the protection and use of personal information should be harmonized along the directions indicated above.

Principles and Current Trends of Neural Decoding (뉴럴 디코딩의 원리와 최신 연구 동향 소개)

  • Kim, Kwangsoo;Ahn, Jungryul;Cha, Seongkwang;Koo, Kyo-in;Goo, Yong Sook
    • Journal of Biomedical Engineering Research
    • /
    • v.38 no.6
    • /
    • pp.342-351
    • /
    • 2017
  • Neural decoding is the procedure of using the spike trains fired by neurons to estimate features of the original stimulus. It is a fundamental step toward understanding how neurons talk to each other and, ultimately, how brains manage information. In this paper, neural decoding strategies are classified into three methodologies, each of which is explained: rate decoding, temporal decoding, and population decoding. Rate decoding is the earliest and simplest method, in which the stimulus is reconstructed from the number of spikes in a given time window (i.e., the spike rate). Since the spike count is a discrete number, the spike rate itself is often quantized rather than continuous, so if the stimulus is not static and simple, rate decoding may not provide a good estimate of it. Temporal decoding reconstructs the stimulus from the timing of the spikes. It can be useful even for rapidly changing stimuli, and our sensory systems are believed to use a temporal rather than a rate decoding strategy. Since the use of large numbers of neurons is one of the operating principles of most nervous systems, population decoding has advantages such as reducing the uncertainty caused by neuronal variability and the ability to represent several stimulus attributes simultaneously. This paper introduces the three decoding methods, shows how information theory can be used in neural decoding, and finally introduces machine-learning-based algorithms for neural decoding.
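
To make the rate-decoding idea concrete, here is a minimal sketch in Python. The linear tuning model (`baseline`, `gain`) and the spike counts are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def rate_decode(spike_times, window, gain, baseline):
    """Rate decoding: estimate a static stimulus from the spike count in a
    time window, assuming a (hypothetical) linear tuning curve
    rate = baseline + gain * stimulus."""
    rate = len(spike_times) / window      # spikes per second
    return (rate - baseline) / gain       # invert the tuning curve

# 20 spikes in 0.5 s -> 40 Hz; with baseline 10 Hz and gain 3 Hz/unit
est = rate_decode(np.linspace(0, 0.5, 20), window=0.5, gain=3.0, baseline=10.0)
# est == (40 - 10) / 3 == 10.0
```

Because the estimate depends only on the count, any temporal structure inside the window is discarded, which is exactly the limitation the abstract notes for non-static stimuli.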

The Approach of Sociology of Law on Counter-Terrorism using Internet (인터넷을 활용한 테러 대응의 법사회학적 접근 - 예방 홍보 관리방안을 중심으로 -)

  • Park, Yong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.225-234
    • /
    • 2007
  • This research aims to awaken awareness of the importance of terrorism-prevention publicity in the environment of new terrorism and to present directions for effective publicity activities within the counter-terrorism system. Government publicity through the public media plays a significant role in promoting people's participation and improving awareness. To strengthen terrorism prevention amid changes in how terrorism occurs, active publicity methods accessible to the public and suited to a highly informatized society must be found. To this end, this research first examines the relationship between the police organization and the public within the counter-terrorism system, and effective terrorism-prevention publicity methods built on actively establishing that relationship. It suggests operational methods, including the introduction of e-CRM, with the ultimate aim of maximizing the publicity effect of terrorism prevention by exploiting the advantages of today's Internet. It also suggests information service activities and other administrative strategies for terrorism-prevention publicity, such as enlarging the distribution of governmental counter-terrorism information materials by strengthening national publicity activities and using the media.

A Study on the Characteristics by Keyword Types in the Intellectual Structure Analysis Based on Co-word Analysis: Focusing on Overseas Open Access Field (동시출현단어 분석에 기초한 지적구조 분석에서 키워드 유형별 특성에 관한 연구 - 국외 오픈액세스 분야를 중심으로 -)

  • Kim, Pan Jun
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.55 no.3
    • /
    • pp.103-129
    • /
    • 2021
  • This study examined the characteristics of two keyword types expressing topics in intellectual structure analysis based on co-word analysis, focusing on the overseas open access field. Specifically, the keyword set extracted from the LISTA database in the field of library and information science was divided into two types (controlled keywords and uncontrolled keywords), and the results of intellectual structure analyses based on co-word analysis were compared. The two keyword types showed significant differences in keyword sets, research maps, influence, and periods. Therefore, in intellectual structure analysis based on co-word analysis, the characteristics of each keyword type should be considered according to the purpose of the study. In other words, controlled keywords are more appropriate for examining the overall research trend of a specific field from the perspective of the entire academic field, while uncontrolled keywords are more appropriate for identifying detailed trends by research area from the perspective of the specific field. For a comprehensive intellectual structure analysis that reflects both viewpoints, it is most desirable to analyze and compare the results of using controlled and uncontrolled keywords individually.
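
For background, the co-word analysis this study builds on starts from counting how often two keywords are assigned to the same record. A minimal sketch (the keyword lists below are invented for illustration; the study itself uses the LISTA database):

```python
from itertools import combinations
from collections import Counter

def coword_matrix(doc_keywords):
    """Count how often each pair of keywords appears in the same record;
    these co-occurrence counts are the raw input to intellectual
    structure analysis (clustering, mapping)."""
    pairs = Counter()
    for kws in doc_keywords:
        # sorted() makes (a, b) and (b, a) the same pair
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [["open access", "repositories", "self-archiving"],
        ["open access", "repositories"],
        ["open access", "scholarly communication"]]
m = coword_matrix(docs)
# ("open access", "repositories") co-occurs in 2 of the 3 records
```

Running the same routine once on controlled keywords and once on uncontrolled keywords yields the two matrices whose maps the study compares.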

3D Non-local Means(NLM) Algorithm Based on Stochastic Distance for Low-dose X-ray Fluoroscopy Denoising (저선량 X-ray 영상의 잡음 제거를 위한 확률 거리 기반 3차원 비지역적 평균 알고리즘)

  • Lee, Min Seok;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.4
    • /
    • pp.61-67
    • /
    • 2017
  • Low-dose X-ray fluoroscopic image sequences, acquired to avoid the risk of radiation exposure, are contaminated by quantum noise. To restore these noisy sequences, we propose a 3D non-local means (NLM) filter based on stochastic distances that can be applied to the denoising of X-ray fluoroscopic image sequences. The stochastic distance is computed within a motion-compensated noise-filtering support to remove the Poisson noise. In this paper, a motion-adaptive weight reflecting frame similarity is proposed to restore the noisy sequences without motion artifacts. Experimental results on real X-ray fluoroscopic image sequences, including comparisons with conventional algorithms, show that the proposed algorithm performs well under both visual and quantitative criteria.
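
A rough sketch of the two ingredients the abstract names: a stochastic (Bhattacharyya) distance between Poisson-distributed patches, and a non-local average over space and adjacent frames. The patch and search sizes and the weight kernel `h` are illustrative assumptions, and the paper's motion compensation is omitted:

```python
import numpy as np

def poisson_bhattacharyya(p, q):
    """Bhattacharyya stochastic distance between two Poisson-distributed
    patches with mean images p and q; for Poisson laws it has the closed
    form (sqrt(p) - sqrt(q))**2 / 2, summed over the patch."""
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def nlm_pixel(stack, r, c, t, patch=1, search=2, h=2.0):
    """Denoise one pixel of frame t of a (time, row, col) sequence by
    weighting candidate patches across space and adjacent frames with
    exp(-distance / h). A sketch, not the paper's motion-compensated
    filter."""
    T, R, C = stack.shape
    ref = stack[t, r-patch:r+patch+1, c-patch:c+patch+1]
    num = den = 0.0
    for tt in range(max(0, t-1), min(T, t+2)):          # adjacent frames
        for rr in range(r-search, r+search+1):
            for cc in range(c-search, c+search+1):
                if rr-patch < 0 or cc-patch < 0 or rr+patch+1 > R or cc+patch+1 > C:
                    continue                            # skip out-of-bounds patches
                cand = stack[tt, rr-patch:rr+patch+1, cc-patch:cc+patch+1]
                w = np.exp(-poisson_bhattacharyya(ref, cand) / h)
                num += w * stack[tt, rr, cc]
                den += w
    return num / den

# A constant sequence is left unchanged: every candidate patch is identical
stack = np.full((3, 7, 7), 4.0)
val = nlm_pixel(stack, 3, 3, 1)   # 4.0
```

The stochastic distance replaces the plain squared patch difference of classic NLM, which better matches the signal-dependent statistics of Poisson (quantum) noise.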

A Generation of ROI Mask and An Automatic Extraction of ROI Using Edge Distribution of JPEG2000 Image (JPEG2000 이미지의 에지 분포를 이용한 ROI 마스크 생성과 자동 관심영역 추출)

  • Seo, Yeong Geon;Kim, Hee Min;Kim, Sang Bok
    • Journal of Digital Contents Society
    • /
    • v.16 no.4
    • /
    • pp.583-593
    • /
    • 2015
  • Today, owing to the growth of computer and communication technology, multimedia data, especially image data, are used in many application areas. JPEG2000, which is widely used these days, provides a Region-of-Interest (ROI) technique. ROI extraction has to be executed rapidly and automatically over a huge number of images, because ROIs are shown to users preferentially. For this purpose, this paper proposes a method for the preferential processing and automatic extraction of ROIs using the distribution of edges in the code blocks of a JPEG2000 image. The steps are: extracting the edges, automatically extracting a practical ROI, grouping the ROI from the ROI blocks, generating the mask blocks, then quantization, ROI coding (the preferential processing), and EBCOT. To show the usefulness of the method, we compare its performance with other methods and evaluate quality by the PSNR between images coded with and without an ROI.
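
The block-level edge-density idea can be sketched as follows. The gradient-magnitude edge test, the 8x8 block size, and the 0.05 density threshold are stand-ins for the paper's actual edge detector and parameters:

```python
import numpy as np

def roi_mask(img, block=8, thresh=0.1, density=0.05):
    """Mark a code block as ROI when its density of edge pixels exceeds a
    threshold. Edges come from a simple gradient-magnitude test here,
    standing in for the paper's edge detector."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > thresh * np.ptp(img)     # binary edge map
    R, C = img.shape
    mask = np.zeros((R // block, C // block), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            blk = edges[i*block:(i+1)*block, j*block:(j+1)*block]
            mask[i, j] = blk.mean() > density           # edge density per block
    return mask

img = np.zeros((16, 16))
img[2:6, 2:6] = 1.0               # a bright square inside the top-left block
m = roi_mask(img)                 # only that block is flagged as ROI
```

Blocks flagged by the mask would then be scaled up (ROI coding) so that EBCOT emits their bit-planes first.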

Video retrieval method using non-parametric based motion classification (비-파라미터 기반의 움직임 분류를 통한 비디오 검색 기법)

  • Kim Nac-Woo;Choi Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.1-11
    • /
    • 2006
  • In this paper, we propose a novel video retrieval algorithm using non-parametric motion classification in a shot-based video indexing structure. The proposed system first obtains the key frame and motion information from each shot segmented by a scene-change detection method, and then extracts visual features and non-parametric motion information from them. Finally, we construct a real-time retrieval system supporting similarity comparison of these spatio-temporal features. After the normalized motion vector fields are created from the MPEG compressed stream, the non-parametric motion feature is extracted effectively by discretizing each normalized motion vector into one of several angle bins and considering the mean, variance, and direction of these bins. We use an edge-based spatial descriptor to extract the visual feature of the key frames. Experimental evidence shows that our algorithm outperforms other video retrieval methods for image indexing and retrieval. To index the feature vectors, we use R*-tree structures.
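
The non-parametric motion feature described above (angle bins plus magnitude statistics) can be sketched like this; the bin count of 8 is an illustrative assumption:

```python
import numpy as np

def motion_descriptor(mv, bins=8):
    """Quantize motion vectors (an N x 2 array of dx, dy) into angle bins
    and describe the field by the bin histogram plus the mean and
    variance of the vector magnitudes."""
    ang = np.arctan2(mv[:, 1], mv[:, 0]) % (2 * np.pi)   # angles in [0, 2*pi)
    idx = (ang / (2 * np.pi / bins)).astype(int) % bins  # angle bin index
    hist = np.bincount(idx, minlength=bins) / len(mv)    # normalized histogram
    mag = np.hypot(mv[:, 0], mv[:, 1])
    return hist, mag.mean(), mag.var()

mv = np.array([[1, 0], [0, 1], [1, 0], [-1, 0]])         # toy motion field
hist, mean_mag, var_mag = motion_descriptor(mv)
# half the vectors point right -> hist[0] == 0.5; all magnitudes 1
```

Shots are then compared by the distance between these fixed-length descriptors, which is what makes R*-tree indexing of the feature vectors possible.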

The Variable Block-based Image Compression Technique using Wavelet Transform (웨이블릿 변환을 이용한 가변블록 기반 영상 압축)

  • 권세안;장우영;송광훈
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.7B
    • /
    • pp.1378-1383
    • /
    • 1999
  • In this paper, an effective variable-block-based image compression technique using the wavelet transform is proposed. Since the statistical properties of the wavelet subbands differ, we apply adaptive quantization to each subband. In the proposed algorithm, each subband is divided into non-overlapping, variable-sized blocks based on directional properties. In addition, we remove wavelet coefficients below a certain threshold for coding efficiency. To compress the transformed data, the algorithm quantizes the wavelet coefficients with a scalar quantizer in the LL subband and with vector quantizers in the other subbands to increase the compression ratio. The proposed algorithm improves both compression ratio and PSNR compared with existing block-based compression algorithms, and it causes no blocking artifacts at very low bit rates even though it is also block-based. It also has an advantage in computational complexity over existing wavelet-based compression algorithms, since it is a block-based algorithm.
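
As an illustration of the subband decomposition and coefficient thresholding the abstract describes, here is a one-level Haar transform with a hard threshold on the detail subbands (the paper's variable-sized blocks and vector quantizers are omitted):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: split into LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2                     # vertical average
    d = (img[0::2] - img[1::2]) / 2                     # vertical detail
    LL, HL = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    LH, HH = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def threshold_details(bands, t):
    """Zero out detail coefficients below threshold t, as the abstract
    describes, leaving the LL band untouched."""
    LL, *details = bands
    return (LL, *[np.where(np.abs(b) < t, 0.0, b) for b in details])

bands = threshold_details(haar2d(np.ones((4, 4))), t=0.5)
# a flat image puts all its energy in LL; the detail subbands stay zero
```

In the paper's scheme the surviving LL coefficients go through a scalar quantizer, while the detail subbands are grouped into direction-dependent variable-sized blocks and vector-quantized.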

A Novel Perceptual No-Reference Video-Quality Measurement With the Histogram Analysis of Luminance and Chrominance (휘도, 색차의 분포도 분석을 이용한 인지적 무기준법 영상 화질 평가방법)

  • Kim, Yo-Han;Sung, Duk-Gu;Han, Jung-Hyun;Shin, Ji-Tae
    • Journal of Broadcast Engineering
    • /
    • v.14 no.2
    • /
    • pp.127-133
    • /
    • 2009
  • With advances in video technology, many researchers are interested in video quality assessment to demonstrate the better performance of their proposed algorithms. Since the human visual system is too complex to be formulated exactly, much research on video quality assessment is still in progress. No-reference video quality assessment is suitable for various video streaming services because it requires no additional data or network capacity to perform the assessment. In this paper, we propose a novel no-reference video quality assessment method based on estimating dynamic-range distortion. To measure its performance, we obtain mean opinion score (MOS) data through subjective video quality tests with the ITU-T P.910 Absolute Category Rating (ACR) method and compare them with the proposed algorithm's output on 363 video sequences. Experimental results show that the proposed algorithm correlates more highly with the obtained MOS.
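
One simple way to turn a luminance histogram into a no-reference dynamic-range cue, sketched under the assumption of 8-bit luma with the nominal video range [16, 235] (the paper's actual measure may differ):

```python
import numpy as np

def dynamic_range_score(frame, low=16, high=235):
    """Fraction of the luminance histogram lying outside the nominal
    8-bit video range [16, 235]; a high fraction suggests crushed or
    clipped dynamic range and hence visible distortion."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    clipped = hist[:low].sum() + hist[high + 1:].sum()
    return clipped / hist.sum()

flat = np.full((64, 64), 128)      # mid-gray frame, nothing clipped
s0 = dynamic_range_score(flat)     # 0.0
crushed = np.full((64, 64), 250)   # everything above the nominal range
s1 = dynamic_range_score(crushed)  # 1.0
```

Because the score needs only the decoded frame, it fits the no-reference setting: no original sequence or side channel is required.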

Edge Enhanced Halftoning using Spatial Perceptual Properties of Human (인간의 공간 지각 특성을 이용한 에지 강조 컬러 해프토닝)

  • Kwak Nae-Joung;Chang Un-Dong;Song Young-Jun;Kim Dong-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.3
    • /
    • pp.123-131
    • /
    • 2005
  • Among digital halftoning techniques, error diffusion halftoning gives better subjective quality than the others, but it also blurs the edges of objects. To overcome this defect, this paper proposes a modified error diffusion halftoning algorithm that enhances edges using the spatial perceptual properties of the human visual system. Using the properties that the human eye perceives not a pixel's luminance itself but the local average luminance, and that it perceives spatial variation, the proposed method computes information of edge enhancement (IEE). The IEE is added to the quantizer's input pixel and fed into the halftoning quantizer, which then produces a halftone image with enhanced edges. This paper also proposes a technique that adapts the coefficients of the error diffusion filter according to the correlation among color components. Computer simulation results show that the proposed method produces finer halftone images than conventional methods thanks to the enhanced edges, preserves edges similar to those in the original image, and reduces defects such as color impulses and false contours.
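
The IEE idea can be sketched on a single gray channel: an edge term derived from the local mean is added to each pixel before the Floyd-Steinberg quantizer. The neighbourhood and `strength` here are illustrative assumptions, not the paper's coefficients:

```python
import numpy as np

def edge_enhanced_halftone(img, strength=0.5):
    """Floyd-Steinberg error diffusion with a simple edge-enhancement term:
    before quantizing, each pixel is pushed away from its local mean,
    a stand-in for the paper's information of edge enhancement (IEE)."""
    f = img.astype(float) / 255.0
    local = f.copy()                 # working copy receiving diffused error
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            # hypothetical IEE: deviation from the causal neighbourhood mean
            nb = f[max(0, y-1):y+1, max(0, x-1):x+2]
            iee = strength * (f[y, x] - nb.mean())
            v = local[y, x] + iee
            out[y, x] = 1.0 if v >= 0.5 else 0.0
            err = v - out[y, x]
            # Floyd-Steinberg error weights 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                local[y, x+1] += err * 7/16
            if y + 1 < h:
                if x > 0:
                    local[y+1, x-1] += err * 3/16
                local[y+1, x] += err * 5/16
                if x + 1 < w:
                    local[y+1, x+1] += err * 1/16
    return out

# Flat regions have zero IEE, so they quantize cleanly: white stays white
binary = edge_enhanced_halftone(np.full((8, 8), 255))
```

On edges the IEE term pushes the quantizer input further from 0.5, so dot placement snaps to the contour instead of being smeared by the diffused error.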
