• Title/Summary/Keyword: 단계선택법 (stepwise selection method)

292 search results, processing time 0.269 seconds

A Study on the 'fragmentation' trend of modern film montage (현대영화 몽타주의 '파편화(fragmentation)' 경향 연구)

  • LEE, Jiyoung
    • Trans-
    • /
    • v.3
    • /
    • pp.29-53
    • /
    • 2017
  • The film scholar Vincent Amiel, in his book The Aesthetics of Montage, divides montage into three types: montage narratif, montage discursif, and montage de correspondances. These three categories encompass the aesthetic classes to which most films belong. Early films pursued the essential, basic functions of editing, which over time tended to be modified in the direction of enhancing the director's goals. In this sense, "expressive montage" is one of the most important concepts of montage, understood not as a 'methodology' that assembles narrative but as a 'purpose'. In the montage stage, expressive montage work proceeds through three decisions: 'selection', which picks out only the necessary parts of the rush footage; 'combination', which arranges the selected footage in a certain order; and 'connection', which determines how scenes are joined in view of the duration of each shot. Connection is the final stage of the montage. There are exceptions, of course. When fiction films with classical narratives use close-ups, or when models or objects of neutered animals are used, the film tends toward a "montage de correspondances" rather than a "montage narratif" or "montage discursif". This study analyzes the montage tendency of works with 'uncertain connection', through the 'collage' produced by close-ups and montage de correspondances, as the 'fragmentation tendency of modern films'. The fragmentation of the montage in contemporary film breaks the continuous, structural nature of the film and confuses the narrative structure visible on its surface. This tendency toward fragmentation of the montage, which began with the close-up, seems to offer an answer to the extensibility of the modern image.

  • PDF

Prediction of Maximal Oxygen Uptake in Males Ages 18~34 Years (18~34 남성의 최대산소 섭취량 추정)

  • Jeon, Yoo-Joung;Im, Jae-Hyeng;Lee, Byung-Kun;Kim, Chang-Hwan;Kim, Byeong-Wan
    • The Korean Journal of Physical Education - Humanities and Social Sciences (한국체육학회지 인문사회과학편)
    • /
    • v.51 no.3
    • /
    • pp.373-382
    • /
    • 2012
  • The purpose of this study is to predict VO2max from body indices and submaximal metabolic responses. The subjects consisted of 250 males aged 18 to 34, randomly separated into two groups: 179 for a sample and 71 for a cross-validation group. They underwent maximal exercise testing with the Bruce protocol, and metabolic responses were measured at the end of the first stage (3 minutes) and the second stage (6 minutes). To predict VO2max, multiple regression analysis with the stepwise method was applied to the sample. Model 1's variables are weight, 6-minute HR, and 6-minute VO2 (R=0.64, SEE=4.74, CV=11.7%, p<.01), and the equation is VO2max (ml/kg/min) = 72.256 - 0.340(Weight) - 0.220(6minHR) + 0.013(6minVO2). Model 2's variables are weight, 6-minute HR, 6-minute VO2, and 6-minute VCO2 (R=0.67, SEE=4.59, CV=11.3%, p<.01), and the equation is VO2max (ml/kg/min) = 68.699 - 0.277(Weight) - 0.206(6minHR) + 0.020(6minVO2) - 0.009(6minVCO2). Neither model showed multicollinearity. Model 2 demonstrated a higher correlation than Model 1. When the models were cross-validated on the 71 men, measured and estimated VO2max were significantly correlated (R=0.53 and 0.56, p<.01). Although both models are functional and valid considering their simplicity and utility, Model 2 is more accurate.
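The two reported regression equations can be applied directly. A minimal Python sketch, assuming hypothetical example measurements (weight in kg, 6-minute HR in beats/min, 6-minute VO2 and VCO2 in ml/min; the function names and input values are illustrative, not from the paper):

```python
def vo2max_model1(weight, hr6, vo2_6):
    """Model 1: VO2max (ml/kg/min) from weight, 6-min HR, and 6-min VO2."""
    return 72.256 - 0.340 * weight - 0.220 * hr6 + 0.013 * vo2_6


def vo2max_model2(weight, hr6, vo2_6, vco2_6):
    """Model 2 adds 6-min VCO2 to the predictors of Model 1."""
    return (68.699 - 0.277 * weight - 0.206 * hr6
            + 0.020 * vo2_6 - 0.009 * vco2_6)


# Hypothetical subject: 70 kg, 6-min HR 150 bpm, 6-min VO2 1500 ml/min,
# 6-min VCO2 1200 ml/min (illustrative values only).
print(vo2max_model1(70, 150, 1500))
print(vo2max_model2(70, 150, 1500, 1200))
```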

Rapid Detection Method of Avian Influenza Subtype H5N1 using Quick Real-Time PCR (Quick Real-time PCR을 이용한 Avian Influenza Virus Subtype H5N1의 신속검출법)

  • Kim, Eul-Hwan;Lee, Dong-Woo;Han, Sang-Hoon;Kwon, Soon-Hwan;Yoon, Byoung-Su
    • Korean Journal of Microbiology
    • /
    • v.43 no.1
    • /
    • pp.23-30
    • /
    • 2007
  • A very rapid Real-time PCR-based detection method for avian influenza A virus (AIV) subtype H5N1 was developed. The target DNA sequence in this study was deduced from an H5N1 subtype-specific 387 bp partial gene of hemagglutinin and was synthesized by PCR-based gene synthesis for reasons of safety. Real-Time PCR was performed on the microchip-based GenSpector™ with a total reaction mixture of 1 μl and extremely short times at each PCR step. Detection, including PCR amplification and melting-temperature analysis, was completed within 13 min in total. The H5N1-specific 189 bp PCR product was correctly amplified down to a minimum of 2.4 molecules of the hemagglutinin gene as template. This kind of PCR was designated Quick Real-Time PCR in this study, and it could be applied to detect not only AIV H5N1 but also other pathogens via PCR-based detection.

Application of Molecular Diagnostics Technology in the Development of a Companion Diagnostics for Malignant Solid Tumors (악성 고형암의 항암제 동반진단 기술에서 분자진단기술의 적용)

  • Kim, Jin-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.3
    • /
    • pp.365-374
    • /
    • 2019
  • Unlike benign tumors, malignant tumors are capable of metastasis and prone to relapse, with poor survival and low quality of life. In Korea, there is a tendency to treat tumors collectively according to the General Principles of Cancer Chemotherapy (GPCC) of the Health Insurance Review & Assessment Service (HIRA). Recently, however, companion diagnostics (CDx) has been recommended over unilateral medication, because biomarker-based molecular diagnostics makes it possible to predict a patient's drug response before drug treatment. Regulatory agencies both in Korea and abroad, including the US Food and Drug Administration (FDA), recommend developing a CDx system at the drug-development stage to ensure the responsiveness and safety of medicines. In this study, I focused on the direction CDx development should take, as well as its current status, through a literature review. Furthermore, I discussed CDx types according to the molecular diagnostic technology employed, such as immunohistochemistry (IHC), polymerase chain reaction (PCR), in situ hybridization (ISH), and next-generation sequencing (NGS), covering both US FDA-approved CDx and those in development. I also raised technology issues in the CDx development process, such as the selection of molecular diagnostics at the time of release, a clear understanding of the CDx mechanism, and the convergence of drug and CDx development. The necessity of a social insurance system for CDx development was also proposed.

Key Methodologies to Effective Site-specific Assessment in Contaminated Soils : A Review (오염토양의 효과적 현장조사에 대한 주요 방법론의 검토)

  • Chung, Doug-Young
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.32 no.4
    • /
    • pp.383-397
    • /
    • 1999
  • For sites to be investigated, the results of such an investigation can be used in determining goals for cleanup, quantifying risks, distinguishing acceptable from unacceptable risk, and developing cleanup plans that do not cause unnecessary delays in the redevelopment and reuse of the property. To do this, it is essential that an appropriately detailed study of the site be performed to identify the cause, nature, and extent of contamination and the possible threats to the environment or to any people living or working nearby, through the analysis of samples of soil and soil gas, groundwater, surface water, and sediment. The migration pathways of contaminants are also examined during this phase. A key aspect of cost-effective site assessment, to help standardize and accelerate the evaluation of contaminated soils, is to provide a simple step-by-step methodology for environmental science and engineering professionals to calculate risk-based, site-specific soil levels for contaminants. Its use may significantly reduce the time it takes to complete soil investigations and cleanup actions at some sites, as well as improve the consistency of these actions across the nation. Effective site assessment requires that the criteria for choosing the type of standard and setting its magnitude come from different sources, depending on many factors including the nature of the contamination. A general scheme for site-specific assessment consists of sequential Phases I, II, and III, defined by a workplan and soil screening levels. Phase I is conducted to identify and confirm a site's recognized environmental conditions resulting from past actions. If Phase I identifies potential hazardous substances, Phase II is usually conducted to confirm the absence, or presence and extent, of contamination; Phase II involves the collection and analysis of samples. Phase III remediates the contaminated soils identified in Phases I and II. Important factors in determining whether an assessment standard is site-specific and suitable are (1) the spatial extent of the sampling and the size of the sample area; (2) the number of samples taken; (3) the strategy for taking samples; and (4) the way the data are analyzed. Although selected methods are recommended, application of quantitative methods should be directed by users with prior training or experience in the dynamic site investigation process.

  • PDF

Automatic velocity analysis using bootstrapped differential semblance and global search methods (고해상도 속도스펙트럼과 전역탐색법을 이용한 자동속도분석)

  • Choi, Hyung-Wook;Byun, Joong-Moo;Seol, Soon-Jee
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.1
    • /
    • pp.31-39
    • /
    • 2010
  • The goal of automatic velocity analysis is to extract accurate velocities from voluminous seismic data efficiently. In this study, we developed an efficient automatic velocity analysis algorithm using bootstrapped differential semblance (BDS) and Monte Carlo inversion. To obtain more accurate results, the algorithm uses BDS, which provides a higher velocity resolution than conventional semblance, as a coherency estimator. In addition, our proposed automatic velocity analysis module includes a conditional initial velocity determination step that improves the running time of the module. A new optional root mean square (RMS) velocity constraint, which prevents picking false peaks, is also used. The developed automatic velocity analysis module was tested on a synthetic dataset and a marine field dataset from the East Sea, Korea. The stacked sections made using velocity results from our algorithm showed coherent events and improved the quality of the normal moveout-correction result. Moreover, since our algorithm finds the interval velocity (v_int) first, subject to interval velocity constraints, and then calculates an RMS velocity function from it, we can estimate geologically reasonable interval velocities. Boundaries of interval velocities also match well with reflection events in the common midpoint stacked sections.
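As a rough illustration of the coherency scan underlying velocity analysis, the sketch below picks a stacking velocity by maximizing conventional semblance over trial velocities on a synthetic spike gather. Note this is the ordinary semblance estimator, not the paper's bootstrapped differential semblance, and all parameter values are invented:

```python
import math

DT = 0.004           # sample interval (s)
NSAMP = 300          # samples per trace
OFFSETS = [0, 200, 400, 600, 800, 1000]   # source-receiver offsets (m)
V_TRUE, T0 = 2000.0, 0.4                  # true stacking velocity (m/s), zero-offset time (s)

def moveout_index(t0, x, v):
    """Sample index of the hyperbolic moveout time t = sqrt(t0^2 + (x/v)^2)."""
    return int(round(math.sqrt(t0 ** 2 + (x / v) ** 2) / DT))

# Synthetic CMP gather: a unit spike on each trace at the true moveout time.
gather = []
for x in OFFSETS:
    trace = [0.0] * NSAMP
    trace[moveout_index(T0, x, V_TRUE)] = 1.0
    gather.append(trace)

def semblance(t0, v):
    """Conventional semblance: (sum of amplitudes)^2 / (N * sum of squares)."""
    amps = []
    for trace, x in zip(gather, OFFSETS):
        i = moveout_index(t0, x, v)
        amps.append(trace[i] if i < NSAMP else 0.0)
    den = len(amps) * sum(a * a for a in amps)
    return (sum(amps) ** 2 / den) if den > 0 else 0.0

# Scan trial velocities and pick the one with maximum coherency.
trial_vs = range(1500, 2550, 50)
best_v = max(trial_vs, key=lambda v: semblance(T0, v))
print(best_v)
```

A differential variant would instead measure misfit between adjacent moveout-corrected traces, and bootstrapping would resample the traces to sharpen the velocity peak.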

A Step-by-Step Primality Test (단계적 소수 판별법)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.103-109
    • /
    • 2013
  • The Miller-Rabin method is the most prevalently used primality test. However, it can mistakenly report a Carmichael number or a semiprime as prime (a strong liar) although they are composite. To mitigate this problem, it selects k bases m satisfying m in [2, n-1] and gcd(m, n) = 1. The Miller-Rabin method determines that a given number n is prime if, after writing n - 1 = 2^s d with d odd, the outcome satisfies m^d ≡ 1 (mod n) or m^(2^r d) ≡ -1 (mod n) for some 0 ≤ r ≤ s-1. This paper proposes a step-by-step primality testing algorithm that restricts the base to m = 2, achieving 98.8% probability. As a first step, the proposed method rejects composite numbers that do not satisfy n = 6k ± 1 with last digit n_1 ≠ 5. Next, it tests primality by computing 2^(2^(s-1) d) ≡ β_(s-1) (mod n) and 2^d ≡ β_0 (mod n). In the third step, for β_0 > 1, it tests β_r ≡ -1 in the range 1 ≤ r ≤ s-2. In the case of β_0 = 1, it retests with m = 3, 5, 7, 11, 13, 17 sequentially. When applied to n in [101, 1000], the proposed algorithm determined 96.55% of the primes in the initial stage; the remaining 3% were settled with β_0 > 1 and 0.55% with β_0 = 1.
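The core check described above is the strong probable-prime test. A minimal Python sketch of that single-base check (this is plain Miller-Rabin restricted to one base, not the paper's full step-by-step algorithm with the 6k ± 1 prefilter):

```python
def strong_probable_prime(n, m=2):
    """Strong probable-prime test to base m.

    Write n - 1 = 2^s * d with d odd, then accept n if
    m^d = 1 (mod n) or m^(2^r * d) = -1 (mod n) for some 0 <= r <= s-1.
    """
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(m, d, n)              # beta_0 in the paper's notation
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):        # successive squarings: beta_1 .. beta_(s-1)
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

print(strong_probable_prime(97))    # a true prime
print(strong_probable_prime(2047))  # 23 * 89: a strong liar to base 2
```

2047 = 23 × 89 passes the base-2 check despite being composite, which illustrates why a single fixed base is insufficient on its own and why the paper adds further steps and fallback bases.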

Ammoniacal Leaching for Recovery of Valuable Metals from Spent Lithium-ion Battery Materials (폐리튬이온전지로부터 유가금속을 회수하기 위한 암모니아 침출법)

  • Ku, Heesuk;Jung, Yeojin;Kang, Ga-hee;Kim, Songlee;Kim, Sookyung;Yang, Donghyo;Rhee, Kangin;Sohn, Jeongsoo;Kwon, Kyungjung
    • Resources Recycling
    • /
    • v.24 no.3
    • /
    • pp.44-50
    • /
    • 2015
  • Recycling technologies will be required in view of the increasing demand for lithium-ion batteries (LIBs). In this study, the leaching behavior of Ni, Co, and Mn in an ammoniacal medium is investigated for spent cathode active materials separated from a commercial LIB pack used in hybrid electric vehicles. The leaching behavior of each metal is analyzed in the presence of a reducing agent and a pH buffering agent. A reducing agent is necessary to increase the leaching efficiency of Ni and Co. In contrast to Ni and Co, the leaching of Mn is insignificant even in the presence of a reducing agent. The most conspicuous difference between acid and ammoniacal leaching is the selective leaching of Ni/Co over Mn. Ammoniacal leaching can also reduce the cost of the basic reagent that, in acid leaching, is needed to raise the pH of the leachate for precipitating the leached metals.

A Posterior Preference Articulation Method to the Weighted Mean Squared Error Minimization Approach in Multi-Response Surface Optimization (다중반응표면 최적화에서 가중평균제곱오차 최소화법을 위한 선호도사후제시법)

  • Jeong, In-Jun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.10
    • /
    • pp.7061-7070
    • /
    • 2015
  • Multi-Response Surface Optimization aims at finding the optimal setting of input variables considering multiple responses simultaneously. The Weighted Mean Squared Error (WMSE) minimization approach, which imposes a different weight on the two components of mean squared error, squared bias and variance, first obtains a WMSE for each response and then minimizes all the WMSEs at once. Most of the methods proposed for the WMSE minimization approach to date belong to the prior preference articulation approach, which requires that a decision maker (DM) provide his/her preference information a priori. However, it is quite difficult for the DM to provide such information in advance, because he/she has not yet experienced the relationships or conflicts among the responses. To overcome this limitation, this paper proposes a posterior preference articulation method for the WMSE minimization approach. The proposed method first generates all (or most) of the nondominated solutions without the DM's preference information. Then, the DM selects the best one from the set of nondominated solutions a posteriori. Its advantage is that it gives the DM an opportunity to understand the tradeoffs in the entire set of nondominated solutions and effectively obtains the most preferred solution suitable for his/her preference structure.
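Two building blocks of the approach can be sketched briefly: computing a WMSE from squared bias and variance, and filtering a candidate set down to its nondominated solutions for the DM to inspect a posteriori. A hedged Python illustration (the function names and the toy candidate set are mine, not the paper's):

```python
def wmse(y_hat, target, variance, w):
    """Weighted mean squared error: w * squared bias + (1 - w) * variance."""
    return w * (y_hat - target) ** 2 + (1 - w) * variance


def nondominated(points):
    """Keep points not dominated in every objective (smaller is better)."""
    def dominates(q, p):
        return all(qi <= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]


# Toy example: each candidate setting yields a (WMSE_1, WMSE_2) pair.
candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
frontier = nondominated(candidates)
print(frontier)   # (3,3) and (6,6) are dominated by (2,2)
```

The DM would then pick a single solution from `frontier` after seeing the tradeoffs, rather than weighting the objectives in advance.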

Clinical Trials and Accuracy of Diagnostic Tests (진단법의 임상시험연구와 진단정확도)

  • Lee, You-Kyoung;Lee, Sang-Moo
    • Journal of Genetic Medicine
    • /
    • v.8 no.1
    • /
    • pp.28-34
    • /
    • 2011
  • Most clinicians understand clinical trials as the evaluation process for new medicines before their use. However, clinical trials can also be applied to laboratory diagnostic tests (LDTs) to verify diagnostic accuracy and efficacy before their clinical laboratory implementation for patients. The clinical trial of an LDT has two distinctive characteristics that differ from the case of pharmaceuticals and are thus worth special consideration. One of them is the level of evidence. Well-designed randomized controlled trials (RCTs) are known to provide the best evidence of the clinical efficacy of any pharmaceutical product. However, RCTs lose practicality when applied to LDTs due to various issues, including ethical complications. For this reason, a comparative study format is considered a more feasible approach for LDTs. In addition, pharmaceuticals and LDTs differ in that the user's intervention is not required for the former but is critical to the latter. Moreover, in the case of pharmaceuticals, end-products are produced by manufacturers before being used by clinicians. In LDTs, however, once reagents and instruments are provided by manufacturers, they are first utilized by clinical laboratories to produce test results, which clinicians then use. In other words, when it comes to LDTs, clinical laboratories play the role of manufacturers, providing reliable test results with improved quality assurance. Considering the distinctive characteristics of LDTs, we offer detailed suggestions for successfully performing clinical trials of LDTs, covering analytical performance measures, clinical test performance measures, diagnostic test accuracy measures, clinical effectiveness measures, and post-implementation surveillance.
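Of the measures listed, the diagnostic test accuracy measures are the most mechanical to compute. A minimal sketch from a 2x2 confusion table (the counts below are invented for illustration):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard accuracy measures of a diagnostic test from a 2x2 table:
    tp/fp = test-positive with/without disease, fn/tn = test-negative."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical study: 95 true positives, 20 false positives,
# 5 false negatives, 180 true negatives.
measures = diagnostic_accuracy(95, 20, 5, 180)
print(measures)
```

Note that sensitivity and specificity are properties of the test itself, while the predictive values also depend on disease prevalence in the study population.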