• Title/Summary/Keyword: technology risk


Usefulness of Data Mining in Criminal Investigation (데이터 마이닝의 범죄수사 적용 가능성)

  • Kim, Joon-Woo;Sohn, Joong-Kweon;Lee, Sang-Han
    • Journal of forensic and investigative science
    • /
    • v.1 no.2
    • /
    • pp.5-19
    • /
    • 2006
  • Data mining is an information-extraction activity that discovers hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques, and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results. Typical applications include market segmentation, customer profiling, fraud detection, evaluation of retail promotions, and credit risk analysis. Law enforcement agencies deal with mass data when investigating crime, and its volume is increasing as computers are used to process it; we now face the new challenge of discovering knowledge in these data. Data mining can be applied in criminal investigation to find offenders by analyzing complex relational data structures and free text, such as criminal records or statements. This study aimed to evaluate the possible applications of data mining, and its limitations, in practical criminal investigation. Clustering of criminal cases is feasible for habitual crimes such as fraud and burglary, where data mining can identify the crime pattern. Neural network modeling, one of the tools of data mining, can be applied to matching a suspect's photograph or handwriting against those of convicts, or to criminal profiling. A case study of practical insurance fraud showed that data mining was also useful for organized crimes such as gang activity, terrorism, and money laundering. However, the products of data mining in criminal investigation should be evaluated with caution, because data mining offers a clue rather than a conclusion. Legal regulation is needed to control abuse by law enforcement agencies and to protect personal privacy and human rights.

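The clustering of habitual-crime cases that the abstract describes can be sketched with a minimal k-means pass over case features; the feature encoding and case values below are hypothetical, not from the paper.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group cases by feature similarity."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each case to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical burglary cases encoded as (hour of day, entry-method code):
# night-time forced entries vs. afternoon unforced entries.
cases = [(2, 1), (3, 1), (2.5, 1), (14, 0), (15, 0), (14.5, 0)]
centers, clusters = kmeans(cases, k=2)
```

On data this well separated the two clusters recover the two modi operandi regardless of which cases are picked as initial centers.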

Analysis of the Effect of the Revised Ground Amplification Factor on the Macro Liquefaction Assessment Method (개정된 지반증폭계수의 Macro적 액상화 평가에 미치는 영향 분석)

  • Baek, Woo-Hyun;Choi, Jae-Soon
    • Journal of the Korean Geotechnical Society
    • /
    • v.36 no.2
    • /
    • pp.5-15
    • /
    • 2020
  • The liquefaction that occurred during the Pohang earthquake (ML = 5.4) raised public awareness of the risk of earthquake-induced liquefaction. The liquefaction hazard maps with a 2 km grid produced in 2014 used more than 100,000 borehole records for the whole country, with regions lacking soil investigation data filled in by interpolation; the site amplification effect and a ground water level of 0 m were considered in that nationwide macro mapping. Recently, the Ministry of Public Administration and Security (2018) published a new site classification method and amplification coefficients in the common standard for seismic design, so the liquefaction hazard map must be redrawn to reflect the revised coefficients. In this study, site classification results based on the average shear wave velocity in soils before and after the revision were compared for the whole country, and liquefaction assessment results were compared for Gangseo-gu, Busan. Two ground accelerations corresponding to return periods of 500 and 1,000 years, and two ground water tables, 5 m for the average condition and 0 m for the extreme condition, were applied. In drawing the liquefaction hazard map, a 500 m grid was used to secure a resolution higher than the previous 2 km grid. As a result, ground classified as SC and SD under the existing standard was reclassified as S2, S3, and S4 under the revised standard. The liquefaction assessments for the 500- and 1,000-year return periods showed that the LPI computed with the pre-revision ground amplification factor was relatively overestimated. These results indicate that the choice of amplification factor strongly influences the liquefaction assessment underlying regional liquefaction hazard maps.
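The LPI the abstract compares is commonly computed with Iwasaki-style depth weighting over the top 20 m of a borehole; a minimal sketch under that assumption, with an entirely illustrative profile (the paper's own borehole data and factor-of-safety procedure are not reproduced here):

```python
def lpi(layers):
    """Iwasaki-style Liquefaction Potential Index.

    layers: (midpoint_depth_m, thickness_m, factor_of_safety) tuples
    for the top 20 m of the profile.
    """
    total = 0.0
    for z, dz, fs in layers:
        if z >= 20:
            continue                   # weight is zero below 20 m
        f = max(0.0, 1.0 - fs)         # only FS < 1 contributes
        w = 10.0 - 0.5 * z             # linear depth weight
        total += f * w * dz
    return total

# Hypothetical borehole: two of three layers liquefiable (FS < 1).
profile = [(1.0, 2.0, 0.8), (3.0, 2.0, 1.2), (5.0, 2.0, 0.9)]
score = lpi(profile)
```

A higher amplification factor lowers FS in each layer, which is why the pre- and post-revision factors yield different LPI maps.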

Comparison of Perception Differences About Nuclear Energy in 4 East Asian Country Students: Aiming at 10th Grade Students who Participated in Scientific Camps, from Four East Asian Countries: Korea, Japan, Taiwan, and Singapore (동아시아 4개국 학생들의 핵에너지에 대한 인식 비교: 과학캠프에 참가한 한국, 일본, 대만, 싱가포르 10학년 학생들을 대상으로)

  • Lee, Hyeong-Jae;Park, Sang-Tae
    • Journal of The Korean Association For Science Education
    • /
    • v.32 no.4
    • /
    • pp.775-788
    • /
    • 2012
  • This study was conducted at a science camp hosted by Nara Women's University Secondary School, Japan, in which 10th grade students from four East Asian countries (Korea, Japan, Taiwan, and Singapore) participated. We surveyed the students' perceptions of nuclear energy. The sample comprised 77 students in total: 12 Korean, 46 Japanese, 9 Taiwanese, and 10 Singaporean. Overall, average perception scores for nuclear energy ranked, from highest to lowest, Korea, Taiwan, Singapore, and Japan. We ran a t-test to identify perception differences between one group comprising the three countries Korea, Taiwan, and Singapore and another comprising the Japanese students; students from the three countries had a significantly higher average than the Japanese students (p<.05). Korean average scores for overall perceptions of nuclear energy were the highest of the four countries, and highest in every subcategory. By contrast, Japanese students held lower, negative perceptions of nuclear energy, mainly because of the recent Fukushima nuclear power plant disaster caused by the tsunami, its subsequent damage, and fears of radiation leaks. This suggests that negative information about disasters and their damage, as with the Chernobyl nuclear accident, can influence people's risk perception more than general information such as nuclear energy technology or news that a plant is operating normally. Even if the probability of such an accident is very low, a single accident can attach extraordinary risk to the technology itself; this strong signal creates a negative image and reinforces such perceptions, which can stigmatize nuclear energy. The study indicates that a government policy giving the highest priority to nuclear energy safety is most important. Once such perceptions and decisions are fixed, we found that they are not easily changed, because they have already been fortified and maintained.
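The group comparison the abstract reports can be sketched as a Welch t statistic (unequal variances not assumed away); the Likert-style scores below are hypothetical stand-ins, not the study's data.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical 5-point perception scores (illustrative only).
kor_twn_sgp = [4.2, 3.9, 4.1, 3.8, 4.0]
japan = [2.9, 3.1, 2.8, 3.2, 3.0]
t = welch_t(kor_twn_sgp, japan)
```

A large positive t here corresponds to the abstract's finding that the three-country group scored higher than the Japanese group; the p-value would come from the t distribution with Welch-Satterthwaite degrees of freedom.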

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to deliver high returns for investors, generally by making the best use of information technology, and for this reason many are keen to attract investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on pivotal concerns such as a company's stability, growth, and risk status; but this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of that efficiency. This paper is therefore built on two ideas for classifying which companies are the more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear-programming-based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs; recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we used DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems; we employed it to classify the efficiency ratings of IT venture companies according to the DEA results. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory and has shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane with the greatest separation between classes; the support vectors are the points closest to it. When the classes are not linearly separable, a kernel function can be used: the inputs are mapped from the original input space into a high-dimensional dot-product feature space in which a linear boundary can be found. Many studies have applied SVM to bankruptcy prediction, financial time-series forecasting, and credit rating estimation. In this study we employed SVM to develop a data-mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel of the SVM. For multi-class SVM, we compared the one-against-one binary decomposition with the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we built a multi-class rating from the DEA efficiency scores and a data-mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when the exact class is hard to determine in the actual market; we therefore also report accuracy within one-class errors, for which the Weston and Watkins method reached 85.7% on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. Future research should enhance the variable selection process, the selection of kernel parameters, generalization, and the sample size for multi-class problems.
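The one-against-one decomposition the paper compares can be sketched as pairwise voting: every pair of classes gets one binary classifier, and the class collecting the most votes wins. The toy pairwise classifier below (nearest class center on a single score) is a hypothetical stand-in for the trained binary SVMs.

```python
from collections import Counter
from itertools import combinations

def ovo_predict(x, classes, binary_vote):
    """One-against-one multi-class decision: each pair of classes
    casts one vote via its binary classifier; majority class wins."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[binary_vote(a, b, x)] += 1
    return votes.most_common(1)[0][0]

# Toy stand-in for trained pairwise SVMs: pick whichever class center
# the efficiency score x is nearer to (illustrative only).
centers = {"A": 0.9, "B": 0.6, "C": 0.3}
def nearer(a, b, x):
    return a if abs(x - centers[a]) <= abs(x - centers[b]) else b

label = ovo_predict(0.85, ["A", "B", "C"], nearer)
```

With k rating classes this trains k(k-1)/2 binary classifiers, versus a single joint optimization in the Weston-Watkins and Crammer-Singer all-together formulations.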

A Study on Clinical Variables Contributing to Differentiation of Delirium and Non-Delirium Patients in the ICU (중환자실 섬망 환자와 비섬망 환자 구분에 기여하는 임상 지표에 관한 연구)

  • Ko, Chanyoung;Kim, Jae-Jin;Cho, Dongrae;Oh, Jooyoung;Park, Jin Young
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.27 no.2
    • /
    • pp.101-110
    • /
    • 2019
  • Objectives : It is not clear which clinical variables are most closely associated with delirium in the Intensive Care Unit (ICU). By comparing clinical data of ICU delirium and non-delirium patients, we sought to identify the variables that most effectively differentiate delirium from non-delirium. Methods : Medical records of 6,386 ICU patients were reviewed. Random Subset Feature Selection and Principal Component Analysis were used to select the set of clinical variables with the highest discriminatory capacity. Statistical analyses were employed to determine the separation capacity of two models, one using just the selected few clinical variables and the other using all clinical variables associated with delirium. Results : There was a significant difference between delirium and non-delirium individuals across 32 clinical variables. The Richmond Agitation Sedation Scale (RASS), urinary catheterization, vascular catheterization, the Hamilton Anxiety Rating Scale (HAM-A), blood urea nitrogen, and Acute Physiology and Chronic Health Evaluation II most effectively differentiated delirium from non-delirium. Multivariable logistic regression analysis showed that, with the exception of vascular catheterization, these clinical variables were independent risk factors for delirium. The separation capacity of the logistic regression model using just these 6 clinical variables was measured with a Receiver Operating Characteristic curve, giving an Area Under the Curve (AUC) of 0.818; the same analysis using all 32 clinical variables gave an AUC of 0.881, denoting a very high separation capacity. Conclusions : The six aforementioned variables most effectively separate delirium from non-delirium. This highlights the importance of closely monitoring patients who have undergone invasive medical procedures and were rated with very low RASS and HAM-A scores.
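The AUC values the abstract reports can be computed without plotting the ROC curve at all, via the Mann-Whitney rank formulation: AUC is the probability that a randomly chosen delirium case is scored above a randomly chosen non-delirium case. The labels and risk scores below are hypothetical.

```python
def auc(labels, scores):
    """AUC as P(positive outranks negative); ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs: 1 = delirium, 0 = non-delirium.
y = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]
a = auc(y, scores)
```

An AUC of 0.5 means the score is uninformative; values like the paper's 0.818 and 0.881 mean the model ranks a delirium patient above a non-delirium patient in roughly 82% and 88% of pairs, respectively.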

Agricultural Policies and Geographical Specialization of Farming in England (영국의 농업정책이 지리적 전문화에 미친 영향 연구)

  • Kim, Ki-Hyuk
    • Journal of the Korean association of regional geographers
    • /
    • v.5 no.1
    • /
    • pp.101-120
    • /
    • 1999
  • The purpose of this study is to analyze the impact of agricultural policies on changes in regional structure driven by specialization during the productivist period. The analysis compares the distribution of farming in the 1950s with that in 1997. Since the 1950s, government policy has played a leading role in shaping the pattern of farming in Great Britain, and a range of British measures has been employed in an attempt to improve the efficiency of agriculture and raise farm incomes. Three fairly distinct phases can be identified in the developing relationship between government policy and British agriculture in the postwar period. In the first phase, the Agriculture Act of 1947 laid the foundations for agricultural productivism in Great Britain until membership of the EC. This was achieved through a system of price support and guaranteed prices, and through a series of grants and subsidies. Guaranteed prices encouraged farmers to intensify production and to specialize in either cereal farming or milk-beef enterprises; the former favoured eastern areas, whereas the latter favoured western areas. Various grants and subsidies were made available to farmers during this period, again as a way of increasing efficiency and farm incomes. Many schemes were provided, such as the Calf Subsidy and the Ploughing Grant, the Hill Cow and Hill Sheep Schemes, and the Hill Farming and Livestock Rearing Grant; some favoured the western uplands, whilst others were biased towards the Lake District. Concentration of farms occurred especially near the London Metropolitan Area and in the south of Scotland. In the second stage, after membership of the EC, very high guaranteed prices created a relatively risk-free environment, so farmers intensified production and levels of self-sufficiency for most agricultural products rose considerably.
As farmers were being paid high prices for as much as they could produce, the policy favoured the areas of larger-scale farming in eastern Britain. As a result of increasing regional disparities in agriculture, the CAP became more geographically sensitive in 1975 with the establishment of the Less Favoured Areas (LFAs). But these were biased towards larger farms, because such farms have more crops and/or livestock, whereas it is small farms with low incomes that are most in need of support. Specialization in cereals such as wheat and barley occurred, but these two cereal crops have experienced rather different trends since the 1950s. Under the CAP, farmers have been paid higher guaranteed prices for wheat than for barley because of the relative shortage of wheat in the EC, and more barley was cultivated as home-grown feedstuff for livestock. In the 1950s dairying was already declining in what was to become the arable area of southern and eastern England. By the mid-1980s the pastoral core had maintained its dominance, but the pastoral periphery had easily surpassed arable England as the second most important dairying district. Pig farming had become increasingly concentrated in intensive units in the main cereal areas of eastern England. These results show that agricultural policy measures implicitly induced concentration and specialization. Measures for increasing demand, reducing supply, or raising farm incomes favour large-scale farming, and price support induced the specialization of farming; the technology for specialization then diffused and induced geographical specialization. This is the process by which regional structure changes through specialization.


Residue analysis of penicillines in livestock and marine products (국내 유통 축·수산물 중 페니실린계 동물용의약품에 대한 잔류실태조사)

  • Song, Ji-Young;Hu, Soo-Jung;Joo, Hyun-Jin;Kim, Mi-Ok;Hwang, Joung-Boon;Han, Yoon-Jung;Kwon, Yu-Jihn;Kang, Shin-Jung;Cho, Dae-Hyun
    • Analytical Science and Technology
    • /
    • v.25 no.4
    • /
    • pp.257-264
    • /
    • 2012
  • Penicillins belong to the β-lactam class of antibiotics and are frequently used in human and veterinary medicine. Despite the positive effects of these drugs, improper use of penicillins poses a potential health risk to consumers. This study was undertaken to determine multiple residues of penicillins, including amoxicillin, ampicillin, oxacillin, benzylpenicillin, cloxacillin, dicloxacillin, and nafcillin, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The method was validated for specificity, precision, recovery, and linearity in livestock and marine products. The analytes were extracted with 80% acetonitrile and cleaned up in a single reversed-phase solid-phase extraction step. Six penicillins showed recoveries higher than 76%, the exception being amoxicillin, and relative standard deviations (RSDs) were not more than 10%. The method was applied to 225 real samples. Benzylpenicillin was detected in 12 livestock products and 7 marine products; amoxicillin, ampicillin, cloxacillin, dicloxacillin, nafcillin, and oxacillin were not detected. The detected levels were 0.001~0.009 mg/kg in livestock products other than eggs and milk, and under 0.03 mg/kg in marine products; all were below the MRLs. The monitoring results indicate that the products are safe, but the safety management of antibiotics should continue through ongoing monitoring.
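The recovery and RSD figures used to validate the method can be sketched as a small calculation over spiked-sample measurements; the spike level and measured values below are hypothetical, not the paper's validation data.

```python
def recovery_and_rsd(measured, spiked):
    """Mean recovery (%) and relative standard deviation (%) for a
    set of replicate measurements of a sample spiked at a known level."""
    recs = [100.0 * m / spiked for m in measured]
    n = len(recs)
    mean = sum(recs) / n
    sd = (sum((r - mean) ** 2 for r in recs) / (n - 1)) ** 0.5
    return mean, 100.0 * sd / mean

# Hypothetical replicate results for a 0.010 mg/kg benzylpenicillin spike.
mean_rec, rsd = recovery_and_rsd([0.0081, 0.0078, 0.0084], 0.010)
```

Values like these (recovery above 76%, RSD under 10%) are what the abstract's acceptance criteria refer to.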

Toxicity Test of Carbosulfan and Phenthoate on Killifish (Carbosulfan과 Phenthoate의 송사리(Oryzias latipes, Medaka)에 대한 독성시험)

  • Bae, Chul-Han;Lee, Jeong-Seok;Cho, Kyung-Won;Park, Hyun-Ju;Cho, Dong-Hun;Shin, Kwan-Seop;Jung, Chang-Kook;Park, Yeon-Ki
    • The Korean Journal of Pesticide Science
    • /
    • v.8 no.4
    • /
    • pp.309-318
    • /
    • 2004
  • Acute and chronic toxicity tests were conducted with killifish (Oryzias latipes, medaka) to evaluate the toxic effects of pesticides. The acute toxicity test recorded mortality 48 and 96 hours after treatment; the chronic toxicity test examined the early life stage for 30 days after hatching, starting from medaka embryos. The test substances were two pesticides, carbosulfan and phenthoate, which are applied to paddy rice and are well known for high fish toxicity. In the acute toxicity test, the 96-hour median lethal concentration (LC50) in medaka was 0.102 mg/L for carbosulfan and 0.167 mg/L for phenthoate. The fish early-life-stage toxicity test was conducted on the basis of the acute results, investigating hatching success, hatching period, post-hatch survival, the length and weight of surviving fish, and abnormal fish. The early-life-stage results are reported as the no observed effect concentration (NOEC), the lowest observed effect concentration (LOEC), and the maximum acceptable toxicant concentration (MATC). The NOECs of carbosulfan and phenthoate were 0.0067 ppm and 0.011 ppm, the LOECs were 0.017 ppm and 0.029 ppm, and the MATCs were 0.011 ppm and 0.018 ppm, respectively. These studies are expected to reveal chronic toxicity effects at concentrations lower than those in acute tests, so the evaluation data will be more realistic and the risk assessment of pesticides will be improved.
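An LC50 like those the abstract reports is commonly estimated by interpolating mortality against log concentration between the two doses that bracket 50%; a minimal sketch of that interpolation, with an illustrative dose-response rather than the study's raw counts:

```python
import math

def lc50(concs, mortality):
    """LC50 by linear interpolation of percent mortality against
    log10(concentration), between the two points bracketing 50%."""
    pts = list(zip(concs, mortality))
    for (c1, m1), (c2, m2) in zip(pts, pts[1:]):
        if m1 <= 50 <= m2:
            frac = (50 - m1) / (m2 - m1)
            return 10 ** (math.log10(c1)
                          + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% mortality not bracketed by the test doses")

# Hypothetical 96 h dose-response (concentrations in mg/L, ascending).
concs = [0.05, 0.1, 0.2]
mortality = [10, 45, 90]   # percent dead at 96 h
value = lc50(concs, mortality)
```

Regulatory studies typically use probit or Spearman-Karber fits rather than straight interpolation, but the bracketing idea is the same.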

Nuclear Terrorism and Global Initiative to Combat Nuclear Terrorism(GICNT): Threats, Responses and Implications for Korea (핵테러리즘과 세계핵테러방지구상(GICNT): 위협, 대응 및 한국에 대한 함의)

  • Yoon, Tae-Young
    • Korean Security Journal
    • /
    • no.26
    • /
    • pp.29-58
    • /
    • 2011
  • Since 11 September 2001, warnings have continued about the nexus of terrorism and nuclear weapons and materials, which poses one of the gravest threats to the international community. The purpose of this study is to analyze the aims, principles, characteristics, activities, impediments to progress, and developmental recommendations of the Global Initiative to Combat Nuclear Terrorism (GICNT), and to suggest its implications for ROK policy. The international community will need a comprehensive strategy with four key elements to accomplish the GICNT: (1) securing and reducing nuclear stockpiles around the world, (2) countering terrorist nuclear plots, (3) preventing and deterring state transfers of nuclear weapons or materials to terrorists, and (4) interdicting nuclear smuggling. Moreover, other steps should be taken to build the needed sense of urgency, including: (1) analysis and assessment through joint threat briefings on realistic nuclear threat possibilities, (2) nuclear terrorism exercises, (3) fast-paced nuclear security reviews, (4) realistic testing of nuclear security performance against insider and outsider threats, and (5) preparing a shared database of threats and incidents. For the ROK, the main concerns are the transfer of North Korea's nuclear weapons, materials, and technology to international terror groups, attacks on nuclear facilities, and the use of nuclear devices. As the world's fifth-ranked nuclear power generation country, the ROK has strengthened its physical protection and nuclear counterterrorism systems based on the international conventions. For comprehensive and effective prevention of nuclear terrorism, the ROK has to strengthen nuclear detection instruments and mobile radiation monitoring systems at airports, ports, road networks, and national critical infrastructure, and must draw up effective crisis management manuals and prepare nuclear counterterrorism exercises and operational postures.
The fundamental key to preventing, detecting, and responding to nuclear terrorism, with its potentially catastrophic impacts, is not only to establish domestic laws, institutions, and systems, but also to strengthen international cooperation.


Relationship Between Carotid Intima-Media Thickness Using Ultrasonography and Diagnostic Indices of Metabolic Syndrome (초음파를 이용한 경동맥 내-중막 두께와 대사증후군 진단지표의 연관성)

  • Ko, Kyung-Sun;Heo, Kyung-Hwa;Won, Yong-Lim;Lee, Sung-Kook;Kim, Ki-Woong
    • Journal of radiological science and technology
    • /
    • v.32 no.3
    • /
    • pp.285-291
    • /
    • 2009
  • The present study investigated the association between the diagnostic indices of metabolic syndrome (MetS) and carotid intima-media thickness measured by ultrasonography. The participants were 315 male employees without carotid atherosclerosis or other cardiovascular disease. The study was approved by the Institutional Review Board of the Occupational Safety and Health Research Institute, and written informed consent was obtained from all participants. Anthropometric parameters and biochemical characteristics were measured with dedicated equipment, and the NCEP-ATP III criteria were used to define MetS. Participants were examined by B-mode ultrasound to measure carotid intima-media thickness (carotid IMT) at the near and far walls of the common carotid artery and the bifurcation (bulb). The mean carotid IMT was 0.739±0.137 mm, and it increased significantly with age, as did systolic and diastolic blood pressure, triglycerides, and fasting glucose. Carotid IMT was significantly correlated with BMI (r=0.170, p=0.004), systolic (r=0.148, p=0.011) and diastolic blood pressure (r=0.123, p=0.036), and HDL-cholesterol (r=-0.164, p=0.005). In multiple logistic regression analysis on the diagnostic indices of MetS, carotid IMT was significantly associated with blood pressure (OR=4.220, p<0.01) and MetS (OR=1.301, p<0.05). The results indicate that blood pressure and MetS are important risk factors for carotid atherosclerosis.
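The odds ratios the abstract reports come directly from the logistic regression coefficients: OR = exp(β), with a 95% confidence interval of exp(β ± 1.96·SE). A minimal sketch; the standard error below is a hypothetical value, since the paper reports only the ORs and p-values.

```python
import math

def odds_ratio(beta, se):
    """Odds ratio and 95% CI from a logistic regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

# beta = ln(4.220) reproduces the reported blood-pressure OR;
# se = 0.5 is illustrative only.
or_, lo, hi = odds_ratio(math.log(4.220), 0.5)
```

An OR of 4.220 means the modeled odds of elevated carotid IMT are about four times higher in the high-blood-pressure group, holding the other indices fixed.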
