• Title/Summary/Keyword: falsification

The Consequences of Data Fabrication and Falsification among Researchers

  • KANG, Eungoo; HWANG, Hee-Joong
    • Journal of Research and Publication Ethics / v.1 no.2 / pp.7-10 / 2020
  • Purpose: A researcher's work is guided by specific ethical codes of conduct. The purpose of the current study is to discuss the fabrication and falsification of data as key ethical misconduct committed by many researchers, focusing on its causes and impact in the research field. Research design, data and methodology: To obtain suitable textual resources, the current study used content analysis to closely examine fabrication and falsification on the basis of prior research in the realm of publication ethics. As a result, the authors were able to collect and interpret adequate textual data from appropriate prior sources. Results: Research misconduct is a common practice in different countries across the world. Based on the findings of this study, data fabrication and falsification have a grievous impact on all the stakeholders of a study; such unethical behavior affects the parties concerned both psychologically and financially. Conclusions: It is therefore recommended that researchers be held accountable. This can be done through different means, including raising awareness of the vulnerability to data fabrication and falsification. Governments and research institutes should also advocate for effective policies guiding research studies across the world.

Fax Sender Verification Technique Based on Pattern Analysis for Preventing Falsification of FAX Documents (팩스 문서 위·변조 방지를 위한 패턴 분석 기반의 팩스 송신처 검증 기법)

  • Kim, Youngho; Choi, Hwangkyu
    • Journal of Digital Contents Society / v.15 no.4 / pp.547-558 / 2014
  • Recently, a variety of cases of fax document abuse have become common in the business processes of corporations, government agencies, and financial institutions. To address this problem, a technique for preventing the falsification of fax documents is needed. In this paper, we propose a new fax sender verification technique based on pattern analysis that prevents falsification of fax documents using only the received fax document. In the proposed technique, the fax sender is verified by analyzing the communication signal patterns between the fax sender and receiver and the image patterns in the received fax document. We conducted experiments applying the technique to real-world fax systems, and the experimental results confirm its tamper-proofing effect.
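The abstract does not detail how the image patterns are compared, so the following is only a minimal sketch of one plausible building block: scoring the transmit-header band of a received page against a stored reference pattern for the claimed sender by normalized correlation. The file names, crop region, and threshold are hypothetical, and this is not the authors' verification pipeline.

```python
import numpy as np
from PIL import Image

def header_match_score(received_fax: str, reference_header: str) -> float:
    """Score the top (sender header) band of a received fax page against a
    stored reference pattern for the claimed sender.

    Illustrative only: a zero-mean normalized correlation, where 1.0 means
    the patterns are identical. File names and crop sizes are hypothetical.
    """
    page = np.asarray(Image.open(received_fax).convert("L"), dtype=float)
    ref = np.asarray(Image.open(reference_header).convert("L"), dtype=float)
    h, w = min(page.shape[0], ref.shape[0]), min(page.shape[1], ref.shape[1])
    band, ref = page[:h, :w], ref[:h, :w]
    band, ref = band - band.mean(), ref - ref.mean()
    denom = np.sqrt((band ** 2).sum() * (ref ** 2).sum()) or 1e-9  # avoid /0
    return float((band * ref).sum() / denom)

# Hypothetical usage: a low score suggests the header pattern does not match
# the claimed sender, i.e. the document may have been forged or relayed.
if header_match_score("received.png", "sender_A_header.png") < 0.8:
    print("Sender pattern mismatch - possible falsified fax")
```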

A Study on Website Forgery/Falsification Detection Technique using Images (이미지를 이용한 웹사이트 위·변조 탐지 기법 연구)

  • Shin, JiYong; Cho, Jiho; Lee, Han; Kim, JeongMin; Lee, Geuk
    • Convergence Security Journal / v.16 no.1 / pp.81-87 / 2016
  • In this paper, we propose a forgery/falsification detection technique for websites that uses images. The proposed system captures an image of the web page when a user accesses a forged/falsified website whose purpose is to steal financial information. The captured image is compared with images of the normal website to detect forgery/falsification: the system calculates a similarity factor between the normal site image and the captured one to decide whether the site is genuine. If the site is determined to be normal, the analysis procedure finishes; if it is determined to be abnormal, a message informs the user so as to prevent further leakage of financial information and additional damage from the forged website.
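The abstract does not state which similarity factor is used, so the sketch below is a deliberately simple stand-in: it compares two page screenshots by mean absolute pixel difference after converting them to grayscale at a common resolution. The file names and the 0.95 threshold are hypothetical.

```python
import numpy as np
from PIL import Image

def image_similarity(path_a: str, path_b: str, size=(256, 256)) -> float:
    """Similarity factor in [0, 1] between two page screenshots.

    Both captures are converted to grayscale, resized to a common resolution,
    and compared by mean absolute pixel difference (illustrative metric only).
    """
    a = np.asarray(Image.open(path_a).convert("L").resize(size), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L").resize(size), dtype=float)
    return 1.0 - float(np.abs(a - b).mean()) / 255.0

# Hypothetical usage: compare a fresh capture with the stored reference image
# of the legitimate site and warn the user when the score drops too low.
if image_similarity("captured_page.png", "reference_page.png") < 0.95:
    print("Warning: page differs from the reference - possible forgery")
```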

The Computer Monitor's Image Evaluated as the Target of Falsification According to the New Conception of Falsification Created by Regarding the Reproduced Document as a Document in Document Crimes (복사문서의 문서간주가 창출한 새로운 변조개념에 의해 문서변조행위대상으로 평가되는 컴퓨터모니터 이미지)

  • Ryu, Seok-Jun
    • Journal of Legislation Research / no.44 / pp.725-756 / 2013
  • In this paper, the possibility of extending the conception of falsification is investigated in order to discuss the validity of this precedent. Such an extension is indispensable under Article 237-2 of the Criminal Code, which regards a reproduced document as a document for the purposes of document crimes; however, it works against the protection of human rights. By contrast, the German Criminal Code contains no such article, and German precedents and the majority theory are reluctant to treat a reproduced document as a document in the sense of document crimes. There is also the view that a reproduced document is not such a document because it is not the expression of the document nominee's intention itself, and that Article 237-2 should therefore be abolished in Korea. On this view, the legislation of Article 237-2 requires serious reconsideration. Nevertheless, if this extended conception is needed and possible, the meaning of the computer monitor's image cannot be ignored within the conception of falsification, and it should be regarded as an element of that conception. In other words, under this conceptual extension the monitor's image is to be evaluated not as the object of falsification but as the target of falsification, even though the precedents do not regard it as a document in document crimes.

Website Falsification Detection System Based on Image and Code Analysis for Enhanced Security Monitoring and Response (이미지 및 코드분석을 활용한 보안관제 지향적 웹사이트 위·변조 탐지 시스템)

  • Kim, Kyu-Il; Choi, Sang-Soo; Park, Hark-Soo; Ko, Sang-Jun; Song, Jung-Suk
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.5 / pp.871-883 / 2014
  • New types of attacks that compromise mainly public, portal, and financial websites for the purpose of economic profit or national confusion are emerging and evolving. In addition, in the case of a 'drive-by download' attack, a host is infected with malware simply by visiting a compromised website. A website falsification detection system is one of the most powerful solutions for coping with such cyber threats against websites, and many domestic CERTs, including the NCSC (National Cyber Security Center), that carry out security monitoring and response services deploy such systems in their target organizations. However, the existing techniques for website falsification detection have practical problems: their time complexity is high and their detection accuracy is insufficient. In this paper, we propose a website falsification detection system based on image and code analysis to improve the performance of the security monitoring and response services in CERTs. The proposed system focuses on improving both the accuracy and the speed of detecting falsification of the target websites.
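The abstract names image analysis and code analysis without detailing either, so here is a minimal sketch of the code-analysis half only: comparing the current page source with a stored baseline and flagging newly injected external scripts, a common sign of drive-by download staging. The baseline/current HTML strings, the regular expression, and the output format are illustrative assumptions rather than the paper's pipeline; the image half could reuse a screenshot similarity check like the one sketched earlier in this list.

```python
import difflib
import re

def code_change_report(baseline_html: str, current_html: str) -> dict:
    """Crude code-analysis check: compare the live page source with a baseline.

    Returns an overall text similarity ratio and any external script sources
    present now but absent from the baseline (illustrative heuristic only).
    """
    ratio = difflib.SequenceMatcher(None, baseline_html, current_html).ratio()
    script_src = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)
    known = set(script_src.findall(baseline_html))
    injected = [s for s in script_src.findall(current_html) if s not in known]
    return {"similarity": ratio, "injected_scripts": injected}

# Hypothetical usage: a low ratio or any injected script triggers an alert
# for the security-monitoring analyst to inspect the page.
baseline = "<html><body><h1>Portal</h1></body></html>"
current = ('<html><body><h1>Portal</h1>'
           '<script src="http://bad.example/x.js"></script></body></html>')
print(code_change_report(baseline, current))
```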

An Efficient Falsification Algorithm for Logical Expressions in DNF (DNF 논리식에 대한 효율적인 반증 알고리즘)

  • Moon, Gyo-Sik
    • Journal of KIISE: Software and Applications / v.28 no.9 / pp.662-668 / 2001
  • Since the problem of disproving a tautology is as hard as the problem of proving it, no polynomial-time algorithm for falsification (or testing invalidity) is feasible. Previous algorithms are mostly based on either divide-and-conquer or graph representation, and most of them demonstrated satisfactory results on a variety of inputs under certain constraints; however, they have had difficulty with large inputs. We propose a new falsification algorithm that uses a Merge Rule to produce a counterexample by constructing a minterm that is not satisfied by an input expression in DNF (disjunctive normal form). We also show that the algorithm is consistent and sound. The algorithm is based on a greedy method that seeks to maximize the number of terms falsified by the assignment made at each step of the falsification process. Empirical results show practical performance on large inputs when falsifying randomized non-tautological problem instances, consuming O(nm²) time, where n is the number of variables and m is the number of terms.
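The paper's Merge Rule is not reproduced in the abstract, so the sketch below only illustrates the greedy idea it describes: repeatedly choose the variable assignment that falsifies the largest number of remaining DNF terms until every term is falsified, yielding a counterexample minterm. The data representation and the example formula are assumptions for illustration.

```python
from typing import Dict, List, Optional

# A DNF term maps each variable it mentions to its literal polarity,
# e.g. {"x1": True, "x2": False} represents (x1 AND NOT x2).
Term = Dict[str, bool]

def greedy_falsify(terms: List[Term], variables: List[str]) -> Optional[Dict[str, bool]]:
    """Greedily build an assignment (minterm) that falsifies every DNF term.

    At each step the variable/value pair that falsifies the most remaining
    terms is chosen. Returns the assignment on success, or None if the
    heuristic gets stuck (which by itself does not prove a tautology).
    """
    assignment: Dict[str, bool] = {}
    alive = list(terms)  # terms not yet falsified

    def falsifies(var: str, value: bool, term: Term) -> bool:
        return var in term and term[var] != value

    while alive:
        unassigned = [v for v in variables if v not in assignment]
        if not unassigned:
            return None  # all variables set, yet some term is still satisfied
        var, val = max(((v, b) for v in unassigned for b in (True, False)),
                       key=lambda p: sum(falsifies(p[0], p[1], t) for t in alive))
        assignment[var] = val
        alive = [t for t in alive if not falsifies(var, val, t)]
    for v in variables:          # don't-care variables complete the minterm
        assignment.setdefault(v, False)
    return assignment

# Example: (x1 AND x2) OR (NOT x1 AND x3) is not a tautology, so a
# falsifying assignment such as {x1: True, x2: False, x3: False} is found.
print(greedy_falsify([{"x1": True, "x2": True}, {"x1": False, "x3": True}],
                     ["x1", "x2", "x3"]))
```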

Mitigation of Adverse Effects of Malicious Users on Cooperative Spectrum Sensing by Using Hausdorff Distance in Cognitive Radio Networks

  • Khan, Muhammad Sajjad; Koo, Insoo
    • Journal of Information and Communication Convergence Engineering / v.13 no.2 / pp.74-80 / 2015
  • In cognitive radios, spectrum sensing plays an important role in accurately detecting the presence or absence of a licensed user. However, the intervention of malicious users (MUs) degrades spectrum sensing performance: such users manipulate their local results and send falsified data to the data fusion center, a process called spectrum sensing data falsification (SSDF). MUs thus degrade spectrum sensing performance and increase uncertainty. In this paper, we propose a method based on the Hausdorff distance and a similarity-measure matrix to quantify the difference between normal-user evidence and malicious-user evidence. In addition, we use Dempster-Shafer theory to combine the sets of evidence from the normal users. We compare the proposed method with the k-means and Jaccard distance methods for malicious user detection. Simulation results show that the proposed method is effective against an SSDF attack.
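As a point of reference for the distance measure named above, the snippet below computes the standard symmetric Hausdorff distance between two point sets with plain NumPy; the interpretation of each user's reports as 2-D evidence points and the toy data are assumptions, and the paper's similarity-measure matrix and Dempster-Shafer combination are not reproduced.

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (m, d) and b (n, d)."""
    diff = a[:, None, :] - b[None, :, :]          # all pairwise differences
    d = np.sqrt((diff ** 2).sum(axis=-1))         # Euclidean distance matrix
    # Farthest nearest-neighbour distance, taken in both directions.
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Hypothetical evidence sets: each row is one sensing round's (energy, belief)
# pair. A user whose set lies far from the others' is a candidate SSDF attacker.
rng = np.random.default_rng(0)
normal_user = rng.normal(loc=[1.0, 0.8], scale=0.05, size=(20, 2))
suspect_user = rng.normal(loc=[0.2, 0.1], scale=0.05, size=(20, 2))
print(hausdorff_distance(normal_user, suspect_user))
```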

Enhanced Robust Cooperative Spectrum Sensing in Cognitive Radio

  • Zhu, Feng; Seo, Seung-Woo
    • Journal of Communications and Networks / v.11 no.2 / pp.122-133 / 2009
  • As wireless spectrum resources become scarcer while some frequency bands suffer from low utilization, the design of cognitive radio (CR) has recently been advocated, allowing opportunistic use of licensed bands by secondary users without interfering with primary users. Spectrum sensing is fundamental for a secondary user to find an available spectrum hole, and cooperative spectrum sensing is more accurate and more widely used since it gathers reports from nodes in different locations. However, if some nodes are compromised and deliberately report false sensing data to the fusion center, the accuracy of the decisions made by the fusion center can be heavily impaired. The weighted sequential probability ratio test (WSPRT), based on a credit evaluation system to limit the damage caused by malicious nodes, was proposed to address such spectrum sensing data falsification (SSDF) attacks, at the price of roughly four times as many samples. In this paper, we propose two new schemes, named the enhanced weighted sequential probability ratio test (EWSPRT) and the enhanced weighted sequential zero/one test (EWSZOT), which are robust against SSDF attacks. By incorporating a new weight module and a new test module, both schemes require far fewer samples than WSPRT. Simulation results show that, at comparable error rates, the sample numbers of EWSPRT and EWSZOT are 40% and 75% lower than those of WSPRT, respectively. We also provide theoretical analysis models to support the performance-improvement estimates of the new schemes.
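To make the family of tests being compared concrete, here is a minimal sketch of a reputation-weighted sequential probability ratio test at the fusion center; the assumed per-node detection/false-alarm probabilities, the weights, and the example reports are hypothetical, and neither the authors' weight module nor their credit-update rule is reproduced.

```python
import math
from typing import Iterable, Tuple

def weighted_sprt(reports: Iterable[Tuple[int, float]],
                  p_d: float = 0.9, p_f: float = 0.1,
                  alpha: float = 0.01, beta: float = 0.01) -> str:
    """A minimal weighted sequential probability ratio test (illustrative only).

    `reports` yields (local_decision, weight) pairs from the sensing nodes,
    where local_decision is 1 (primary user present) or 0 (absent) and weight
    is the node's reputation in [0, 1]. p_d / p_f are the assumed detection
    and false-alarm probabilities of an honest node.
    """
    upper = math.log((1 - beta) / alpha)       # accept H1 above this
    lower = math.log(beta / (1 - alpha))       # accept H0 below this
    llr = 0.0
    for decision, weight in reports:
        step = math.log(p_d / p_f) if decision == 1 else math.log((1 - p_d) / (1 - p_f))
        llr += weight * step                   # reputation-weighted contribution
        if llr >= upper:
            return "H1: primary user present"
        if llr <= lower:
            return "H0: spectrum hole"
    return "undecided: collect more reports"

# Usage with hypothetical reports: honest nodes (weight ~1) report 1, while a
# low-reputation node (weight 0.1) reports 0 and barely shifts the statistic.
print(weighted_sprt([(1, 1.0), (1, 0.9), (0, 0.1), (1, 1.0)]))
```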

Secure Cooperative Sensing Scheme for Cognitive Radio Networks (인지 라디오 네트워크를 위한 안전한 협력 센싱 기법)

  • Kim, Taewoon; Choi, Wooyeol
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.8 / pp.877-889 / 2016
  • In this paper, we introduce the basic components of cognitive radio networks along with possible threats. Specifically, we investigate the SSDF (spectrum sensing data falsification) attack, which is one of the easiest attacks to carry out. Despite its simplicity, the SSDF attack requires careful attention in order to build a secure system that resists it. The proposed scheme utilizes an anomaly detection technique to identify malicious users as well as their sensing reports. The simulation results show that the proposed scheme can effectively detect erroneous sensing reports and thus achieve correct detection of the active primary users.
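The abstract does not specify which anomaly-detection technique is used, so the following is a generic stand-in: a robust z-score (median/MAD) check that flags sensing reports lying far from the consensus of the other users. The threshold and the example energy values are hypothetical.

```python
import numpy as np

def flag_anomalous_reports(reports, threshold: float = 3.5) -> np.ndarray:
    """Flag sensing reports that deviate strongly from the consensus.

    Uses a robust z-score (median / MAD); this is a generic anomaly-detection
    stand-in, not the specific detector proposed in the paper.
    Returns a boolean mask, True where the report looks falsified.
    """
    reports = np.asarray(reports, dtype=float)
    median = np.median(reports)
    mad = np.median(np.abs(reports - median)) or 1e-9   # avoid division by zero
    robust_z = 0.6745 * (reports - median) / mad
    return np.abs(robust_z) > threshold

# Hypothetical energy-detection reports from 8 secondary users; the last
# value is an SSDF-style falsified report and should be the only one flagged.
print(flag_anomalous_reports([0.92, 0.88, 0.95, 0.90, 0.91, 0.89, 0.93, 0.15]))
```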

Publication Ethics and KODISA Journals

  • KIM, Dongho; YOUN, Myoung-Kil
    • Journal of Research and Publication Ethics / v.1 no.2 / pp.1-5 / 2020
  • Purpose: The purpose of this paper is to identify the most common forms of misconduct in publication ethics, to demonstrate how the KODISA journals manage such misconduct, and to share the findings with future and potential authors of the Journal of Research and Publication Ethics (JRPE). Research design, data and methodology: This is an analytical study that explores and examines research and publication ethics and misconduct. Results: Based on the literature review, the major publication misconducts that many academic journals have had to contend with over the years encompass unethical authorship (including ghost, guest, and gift authorship), data falsification and fabrication, plagiarism (including self-plagiarism), submission and publication fraud (multiple submission and publication), and potential conflicts of interest. Conclusions: KODISA and its journals have strived and done great work in making the journals transparent and in combating the issues associated with plagiarism, including self-plagiarism. However, there appears to be no mechanism to detect or deter unethical authorship, conflicts of interest, or fabrication and falsification. The inception of JRPE signifies how KODISA and its journals continue to view research and publication ethics as a factor of foremost importance in maintaining and improving their academic journals. Future research and scholarly manuscripts in JRPE could provide necessary and up-to-date information about research and publication ethics, practices, and misconduct.