• Title/Summary/Keyword: Single allocation

297 search results

Frame Synchronization Algorithm based on Differential Correlation for Burst OFDM System (Burst OFDM 시스템을 위한 차동 상관 기반의 프레임 동기 알고리즘)

  • Um Jung-Sun;Do Joo-Hyun;Kim Min-Gu;Choi Hyung-Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.10C / pp.1017-1026 / 2005
  • In a burst OFDM system, frame synchronization must be performed first to acquire the received frame and estimate the correct FFT-window position. Conventional frame synchronization algorithms that exploit design features of the preamble symbol, the repetition pattern of the OFDM symbol imposed by the pilot sub-carrier allocation rule, and the cyclic prefix (CP) have difficulty detecting precise frame timing, because their correlation metrics rise and fall gradually. Algorithms based on correlation between a reference signal and the received signal, in turn, suffer performance degradation under frequency offset. We therefore adopt a differential correlation method, which is robust to frequency offset and yields a clear peak at the correct frame timing. However, plain differential correlation needs improvement, since it usually produces multiple peaks due to the repetition pattern. In this paper, we propose an enhanced frame synchronization algorithm that obtains a single clear peak by differentially correlating samples of the identical repeating pattern. We also introduce a normalization scheme that divides the differential correlation output by the signal power, reducing frame timing error in high-speed mobile channel environments.
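The abstract's key idea, a lag product that cancels a constant carrier frequency offset before correlating against a known preamble, can be sketched as follows. This is a toy illustration under assumed parameters (random-phase preamble, frame offset of 100 samples, normalized-correlation form), not the paper's exact algorithm.

```python
import numpy as np

def differential_correlation(rx, ref, lag=1):
    """Slide the differential (lag product) of the received signal against
    the differential of the known reference. The product r[n+1]*conj(r[n])
    cancels a constant carrier frequency offset, because the offset adds
    the same phase rotation to every sample."""
    d_rx = rx[lag:] * np.conj(rx[:-lag])
    d_ref = ref[lag:] * np.conj(ref[:-lag])
    n = len(d_ref)
    metric = np.empty(len(d_rx) - n + 1)
    for k in range(len(metric)):
        seg = d_rx[k:k + n]
        # normalize by signal power so the peak height stays comparable
        # across channel conditions
        denom = np.linalg.norm(d_ref) * np.linalg.norm(seg) + 1e-12
        metric[k] = np.abs(np.vdot(d_ref, seg)) / denom
    return metric

# toy example: a random-phase preamble buried in noise, with a carrier
# frequency offset of 0.01 cycles/sample applied on top
rng = np.random.default_rng(0)
preamble = np.exp(2j * np.pi * rng.random(64))
offset = 100                      # true frame start (hypothetical)
rx = (rng.standard_normal(256) + 1j * rng.standard_normal(256)) * 0.1
rx[offset:offset + 64] += preamble
rx *= np.exp(2j * np.pi * 0.01 * np.arange(256))

metric = differential_correlation(rx, preamble)
# the metric peaks at the frame start despite the frequency offset
```

A plain (non-differential) correlation against `preamble` would lose its peak here, because the offset de-coheres the sum; the lag product reduces the offset to a common phase factor that the magnitude ignores.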

Genetic signature of strong recent positive selection at interleukin-32 gene in goat

  • Asif, Akhtar Rasool;Qadri, Sumayyah;Ijaz, Nabeel;Javed, Ruheena;Ansari, Abdur Rahman;Awais, Muhammd;Younus, Muhammad;Riaz, Hasan;Du, Xiaoyong
    • Asian-Australasian Journal of Animal Sciences / v.30 no.7 / pp.912-919 / 2017
  • Objective: Identification of candidate genes that play key roles in phenotypic variation can provide new information about evolution and positive selection. Interleukin (IL)-32 is involved in many biological processes; however, its role in the immune response against various diseases in mammals is poorly understood. Therefore, the current investigation was performed to better understand the molecular evolution and positive selection of single nucleotide polymorphisms in the IL-32 gene. Methods: Using a fixation index ($F_{ST}$)-based method, the IL-32 (9375) gene was found to be an outlier under significant positive selection, with the provisional combined allocation of mean heterozygosity and $F_{ST}$. Using nucleotide sequences of 11 mammalian species from the National Center for Biotechnology Information database, the evolutionary selection of the IL-32 gene was determined with the maximum likelihood model method, through four models (M1a, M2a, M7, and M8) in the Codeml program of Phylogenetic Analysis by Maximum Likelihood. Results: IL-32 was detected to be under positive selection using the $F_{ST}$ simulation method. The phylogenetic tree revealed that goat IL-32 closely resembles sheep IL-32. Comparing the coding nucleotide sequences among the 11 species, the goat IL-32 gene shared identity with sheep (96.54%), bison (91.97%), camel (58.39%), cat (56.59%), buffalo (56.50%), human (56.13%), dog (50.97%), horse (54.04%), and rabbit (53.41%). Conclusion: This study provides evidence that the IL-32 gene is under significant positive selection in goat.
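The $F_{ST}$ outlier idea rests on Wright's fixation index, which compares heterozygosity between and within populations. A minimal sketch for a biallelic SNP in two equally sized populations (illustrative only; outlier scans like the one in the abstract use many loci and simulated null distributions):

```python
def fst_two_populations(p1, p2):
    """Wright's F_ST for a biallelic SNP from allele frequencies in two
    equally sized populations: F_ST = (H_T - H_S) / H_T, where H_T is the
    expected heterozygosity of the pooled population and H_S the mean
    within-population expected heterozygosity."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                      # pooled heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-pop
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# identical frequencies -> no differentiation
print(fst_two_populations(0.5, 0.5))   # 0.0
# alternative alleles fixed in each population -> complete differentiation
print(fst_two_populations(1.0, 0.0))   # 1.0
```

A locus under strong local positive selection drags allele frequencies apart between populations, inflating $F_{ST}$ relative to the genome-wide background, which is what flags it as an outlier.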

Effect of Pulsed Electromagnetic Field Treatment on Alleviation of Lumbar Myalgia; A Single Center, Randomized, Double-blind, Sham-controlled Pilot Trial Study

  • Park, Won-Hyung;Sun, Seung-Ho;Lee, Sun-Gu;Kang, Byoung-Kab;Lee, Jong-Soo;Hwang, Do-Guwn;Cha, Yun-Yeop
    • Journal of Magnetics / v.19 no.2 / pp.161-169 / 2014
  • The aim of this study is to investigate the efficacy of pulsed electromagnetic field (PEMF) therapy for the alleviation of lumbar myalgia. This is a randomized, sham-controlled, double-blind pilot study. Thirty-eight patients were randomly allocated to a PEMF group and a Sham group of 19 patients each (1 patient dropped out of the Sham group). The PEMF group was treated with the PEMF device and the Sham group with a sham device on the lumbar muscle and acupuncture points, three times a week for a total of two weeks. Evaluations of the Visual Analogue Scale for bothersomeness (VASB), Visual Analogue Scale for pain intensity (VASP), Oswestry Disability Index (ODI), 36-Item Short Form Health Survey (SF-36), EuroQol-5 Dimension (EQ-5D), Beck's Depression Inventory (BDI), and Roland-Morris Disability Questionnaire (RMDQ) were carried out before treatment and 1 week after treatment. The primary outcome measure was the VASB, measured 1 week after the end of the pulsed electromagnetic therapy. VASB scores changed from baseline by $-2.06{\pm}2.12$ in the PEMF group and by $-0.52{\pm}0.82$ in the Sham group (p < 0.05). VASP scores were reduced by $-2.10{\pm}2.12$ in the PEMF group and by $-0.53{\pm}1.50$ in the Sham group (p < 0.05). The PEMF group showed significant improvements in all of the VASB, VASP, ODI, SF-36, EQ-5D, BDI, and RMDQ scores, while the Sham group improved significantly on all scores except the VASP. Moreover, the VASB, VASP, and RMDQ scores of the PEMF group were much lower than those of the Sham group; the two groups showed no significant difference in ODI, SF-36, EQ-5D, and BDI. This study demonstrates the effectiveness of PEMF treatment for alleviating lumbar myalgia.

An Application of Artificial Intelligence System for Accuracy Improvement in Classification of Remotely Sensed Images (원격탐사 영상의 분류정확도 향상을 위한 인공지능형 시스템의 적용)

  • 양인태;한성만;박재국
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.20 no.1 / pp.21-31 / 2002
  • This study applied neural network theory and fuzzy set theory to improve accuracy in classifying remotely sensed images. Remotely sensed data have long been used to map land cover, and the accuracy depends on a range of factors related to the data set and the methods used. Thus, the accuracy of maps derived from conventional supervised image classification is a function of factors related to the training, allocation, and testing stages of the classification. Conventional techniques assume that all pixels within the image are pure, that is, that each represents an area of homogeneous cover of a single land-cover class. This assumption is often untenable, since pixels of mixed land-cover composition are abundant in an image, and mixed pixels are a major problem in land-cover mapping applications. For each pixel, the strength of class membership derived in the classification may be related to its land-cover composition. The concept of a pixel having a degree of membership in all classes is fundamental to fuzzy-sets-based techniques. A major problem with fuzzy-sets and probabilistic methods is that they are slow and computationally demanding; for analyzing large data sets with rapid processing, alternative techniques are required. One particularly attractive approach is the use of artificial neural networks. These are non-parametric techniques which have been shown to classify data as accurately as, or more accurately than, conventional classifiers. An artificial neural network, once trained, can classify data extremely rapidly, since the classification process reduces to a large number of extremely simple calculations which may be performed in parallel.
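The soft-membership idea for mixed pixels can be sketched with a fuzzy c-means-style membership function. The class means and pixel values below are hypothetical; the point is that a pure pixel gets a near-1 membership in one class, while a mixed pixel splits its membership.

```python
import numpy as np

def fuzzy_memberships(pixels, class_means, m=2.0):
    """Soft class memberships in the style of fuzzy c-means: membership is
    inversely related to distance from each class mean (fuzzifier m), and
    the memberships of each pixel sum to 1 across classes."""
    # pairwise distances: shape (n_pixels, n_classes)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero for pure pixels
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

means = np.array([[0.0, 0.0], [10.0, 10.0]])   # two hypothetical class means
pixels = np.array([[1.0, 1.0],                 # nearly pure class 0
                   [9.0, 9.0],                 # nearly pure class 1
                   [5.0, 5.0]])                # mixed pixel, halfway between
u = fuzzy_memberships(pixels, means)
# the mixed pixel receives ~0.5 membership in each class
```

A hard classifier would force the mixed pixel into one class; the membership vector instead preserves the information that its spectral signature is composite.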

Knowledge-driven Dynamic Capability and Organizational Alignment: A Revelatory Historical Case

  • Kim, Gyeung-Min
    • Asia pacific journal of information systems / v.20 no.1 / pp.33-56 / 2010
  • The current business environment has been characterized as less munificent, highly uncertain, and constantly evolving. In this environment, companies with dynamic capability are reported to be more successful than others in building competitive advantage. Dynamic capability focuses on the link between a dynamically changing environment, strategic agility, architectural reconfiguration, and value creation. Flexible and adaptive to changing market circumstances, an organization with dynamic capability exhibits high resource fluidity, reflected in business processes, resource allocation, human resource management, and incentives that make business transformation faster and easier. Successful redeployment of resources for dynamic adaptation requires organizational forms and reward systems to be well aligned with the firm's technological infrastructure and business processes; this alignment is an executive-level commitment. Building dynamic capability is knowledge-driven, relying on new knowledge to reconfigure the firm's resources. Past studies established the link between the effective execution of a knowledge-focused strategy and the corresponding setting of architectural elements such as human resources, structure, process, and information systems. They do not, however, describe in detail the underlying processes by which architectural elements are adjusted in a coordinated manner to build knowledge-driven dynamic capability; in fact, understanding these processes is one of the top issues in IT management. This study analyzed how a Korean corporation with a knowledge-focused strategy aligned its architectural elements to develop dynamic capability and thus create value in dynamically changing markets.
When the Korean economy was in crisis, the company implemented a knowledge-focused strategy and restructured the organization's architecture, by which human and knowledge resources were identified, structured, integrated, and coordinated to identify and seize market opportunities. Specifically, the following architectural elements were reconfigured: human resources, decision rights, reward and evaluation systems, process, and IT infrastructure. As indicated by sales growth, the reconfiguration helped the company create value in an extremely turbulent environment. According to Ancona et al. (2001), different types of architecture emerge depending on the lenses the organization uses. For example, if an organization uses political lenses focusing on power, influence, and conflict, an architecture that leverages power and negotiates across multiple interest groups will emerge. Similarly, if an organization uses economic lenses focusing on the rational behavior of organizational actors making choices based on the costs and benefits of action, the organizational architecture should be designed to motivate and provide incentives for the actors (Smith, 2001). By comparison, information-processing perspectives consider architecture to be designed to maximize the actors' information-processing capacity. Using knowledge lenses, the company studied in this research established architectural elements in a manner that allows the firm to effectively structure knowledge resources to form dynamic capability. This study is a revelatory single case with a historical perspective. As a result, a set of propositions and a framework are derived, which can be used for architectural alignment.

Risk and Responsibility in Korean Tobacco Litigation: Epidemiology and Causality in Late Modern Risk (한국 담배소송에서의 위험과 책임: 역학과 후기 근대적 인과)

  • Park, Jinyoung;Yi, Doogab
    • Journal of Science and Technology Studies / v.15 no.2 / pp.229-262 / 2015
  • Toxic tort cases have increased dramatically since the 1970s, as large technological systems, such as nuclear power plants and chemical factories, and mass-produced high-tech products have exposed citizens and consumers to dangerous substances. It was, however, difficult to establish a causal connection between exposure and the alleged harms in many environmental, pollution, and product liability cases under the tort-law conception of causation and responsibility. Science and law were called upon to resolve such 'late modern' legal cases, where true causes are hard to find and no single explanatory factor is sufficient to explain diseases like cancer. This article examines how plaintiffs in the Korean tobacco litigation mobilized such late modern tools of science and law, such as epidemiology and the allocation of the burden of proof, in the context of the global circulation of science and law. It further shows how a set of scientific theories and legal arguments developed to cope with late modern risk played a central role in establishing causation between smoking and cancer in 2011. This article suggests that STS scholars can fruitfully examine the interaction between science and law as a way to understand and engage with social and legal issues engendered by late modern risk.

Trip Assignment for Transport Card Based Seoul Metropolitan Subway Using Monte Carlo Method (Monte Carlo 기법을 이용한 교통카드기반 수도권 지하철 통행배정)

  • Meeyoung Lee;Doohee Nam
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.2 / pp.64-79 / 2023
  • This study reviewed the process of applying the Monte Carlo simulation technique to the traffic assignment problem of the metropolitan subway. The analysis assumed a normal distribution, in which the travel time information of the inter-station sample is the basis of the probit model; from this, the mean and standard deviation are calculated separately for each inter-station movement. A scheme was proposed to apply the simulation with weights on the in-vehicle time of individual links and the walking and headway components of transfers. Long-distance traffic with 50 or fewer samples was evaluated by analyzing the characteristics of similar traffic. The research results were reviewed in two directions by applying them to the Seoul Metropolitan Subway network. The travel time between single stations on the Seolleung-Seongsu route was verified by applying random sampling to the in-vehicle time and transfer time. The assumption of a normal distribution was accepted for inter-station movements with more than 50 samples across the entire Seoul Metropolitan Subway. For long-distance traffic with fewer than 50 samples, the minimum distance between stations was 122 km; it was therefore judged that equality of sample variances was achieved and that the inter-station mean and standard deviation of the transport card data for stations at this distance could be applied.
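The probit-style assignment the abstract describes can be sketched by Monte Carlo: draw each route's travel time from a normal distribution fitted to the smart-card samples, and assign the trip to whichever route draws the smallest time. The two routes and their (mean, std) parameters below are hypothetical.

```python
import random

def monte_carlo_route_shares(routes, n_draws=20000, seed=42):
    """Probit-style route choice by Monte Carlo simulation: each route's
    travel time is drawn from a normal distribution (mean, std assumed to
    be estimated from smart-card data); the route with the smallest draw
    wins, and choice shares are the fraction of draws each route wins."""
    rng = random.Random(seed)
    wins = [0] * len(routes)
    for _ in range(n_draws):
        draws = [rng.gauss(mu, sigma) for mu, sigma in routes]
        wins[min(range(len(routes)), key=lambda i: draws[i])] += 1
    return [w / n_draws for w in wins]

# two hypothetical subway paths: (mean minutes, std minutes)
shares = monte_carlo_route_shares([(30.0, 4.0), (33.0, 4.0)])
# the faster route captures the larger share, but the slower route still
# wins some draws because the travel-time distributions overlap
```

This is exactly why the normality assumption (and an adequate sample size for each inter-station pair) matters: the choice shares depend on the whole distribution, not just the means.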

Development of a Single Allocation Hub Network Design Model with Transportation Economies of Scale (수송 규모의 경제 효과를 고려한 단일 할당 허브 네트워크 설계 모형의 개발)

  • Kim, Dong Kyu;Park, Chang Ho;Lee, Jin Su
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.6D / pp.917-926 / 2006
  • Transportation economies of scale are an essential property of hub networks. One critical aspect of the hub network design problem is quantifying the cost savings that stem from economies of scale, the costs of operating hub facilities, and the opportunity costs associated with delays caused by consolidating traffic flows. Due to the NP-complete nature of the hub location problem, however, most previous researchers have focused on developing heuristic algorithms for approximate solutions. The purpose of this paper is to develop a hub network design model that accounts for transportation economies of scale arising from the consolidation of traffic flows. The model is designed to reflect the unique characteristics of hub networks and to determine several cost components. Heuristic algorithms for the developed model are suggested, and the results of the model are compared with recently published studies using real data. The analysis shows that the proposed model reflects transportation economies of scale due to consolidation of flows. This study can not only form the theoretical basis of effective and rational hub network design but also contribute to the assessment of existing and planned logistics systems.
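The standard single-allocation cost structure behind such models routes every origin-destination flow through the hubs of its endpoints, discounting the consolidated inter-hub leg by a factor alpha < 1. A brute-force toy version (the instance, hub set, and alpha are hypothetical; the paper develops heuristics precisely because enumeration does not scale):

```python
import itertools

def total_cost(flow, dist, alloc, alpha=0.6):
    """Transport cost of a single-allocation hub network: each o-d flow is
    routed origin -> origin's hub -> destination's hub -> destination, with
    the inter-hub leg discounted by alpha (economies of scale from
    consolidated flows)."""
    n = len(flow)
    cost = 0.0
    for i in range(n):
        for j in range(n):
            k, l = alloc[i], alloc[j]
            cost += flow[i][j] * (dist[i][k] + alpha * dist[k][l] + dist[l][j])
    return cost

def best_single_allocation(flow, dist, hubs, alpha=0.6):
    """Enumerate every assignment of nodes to one hub each (feasible only
    for tiny instances; the real problem is NP-hard)."""
    n = len(flow)
    best = None
    for alloc in itertools.product(hubs, repeat=n):
        if any(alloc[h] != h for h in hubs):   # a hub must serve itself
            continue
        c = total_cost(flow, dist, alloc, alpha)
        if best is None or c < best[0]:
            best = (c, alloc)
    return best

# 4 nodes, hubs fixed at 0 and 1, unit flows, symmetric distances:
# node 2 sits near hub 0 and node 3 near hub 1
dist = [[0, 10, 2, 9], [10, 0, 9, 2], [2, 9, 0, 8], [9, 2, 8, 0]]
flow = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
cost, alloc = best_single_allocation(flow, dist, [0, 1])
# the search allocates each spoke to its nearest hub
```

With alpha = 1 the hub routing saves nothing; lowering alpha is what makes the consolidated inter-hub leg, and hence hubbing itself, economical.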

Problem Identification and Improvement Measures through Government24 App User Review Analysis: Insights through Topic Model (정부24 앱 사용자 리뷰 분석을 통한 문제 파악 및 개선방안: 토픽 모델을 통한 통찰)

  • MuMoungCho Han;Mijin Noh;YangSok Kim
    • Smart Media Journal / v.12 no.11 / pp.27-35 / 2023
  • The Fourth Industrial Revolution and the COVID-19 pandemic have boosted the use of the Government 24 app for public service complaints in the era of non-face-to-face interactions. There has been a growing influx of complaints and improvement demands from users of public apps, and systematic management of these apps is deemed necessary. The aim of this study is to analyze the grievances of Government 24 app users, understand the current dissatisfaction among citizens, and propose potential improvements. Data were collected from the Google Play Store from May 2, 2013, to June 30, 2023, comprising a total of 6,344 records. Among these, 1,199 records with a rating of 1 and at least one 'thumbs-up' were used for topic modeling analysis. The analysis revealed seven topics: 'Issues with certificate issuance,' 'Website functionality and UI problems,' 'User ID-related issues,' 'Update problems,' 'Government employee app management issues,' 'Budget wastage concerns' (e.g., 'It's not worth even a single star,' 'It's a waste of taxpayers' money'), and 'Password-related problems.' Furthermore, the overall trend of these topics showed an increase until 2021 and a slight decrease in 2022, but a resurgence in 2023, underscoring the urgency of updates and management. We hope that the results of this study will contribute to the development and management of public apps that satisfy citizens in the future.
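Topic models of the kind used here are typically LDA-style: each review is a mixture of topics, each topic a distribution over words. A minimal collapsed Gibbs sampler on toy reviews loosely echoing two of the seven topics (the documents, hyperparameters, and topic count are all assumptions for illustration, not the study's setup):

```python
import random

def lda_gibbs(docs, n_topics=2, n_iter=300, alpha=0.1, beta=0.01, seed=7):
    """Minimal collapsed Gibbs sampler for Latent Dirichlet Allocation.
    Returns a word index and the topic-word count matrix, from which the
    dominant topic of each word can be read off."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    w2i = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    ndk = [[0] * n_topics for _ in docs]        # document-topic counts
    nkw = [[0] * V for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                         # tokens per topic
    z = []                                      # topic of each token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            t = rng.randrange(n_topics)
            zs.append(t)
            ndk[d][t] += 1; nkw[t][w2i[w]] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, wi = z[d][i], w2i[w]
                # remove the token, resample its topic, put it back
                ndk[d][t] -= 1; nkw[t][wi] -= 1; nk[t] -= 1
                weights = [(ndk[d][k] + alpha) * (nkw[k][wi] + beta)
                           / (nk[k] + V * beta) for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wi] += 1; nk[t] += 1
    return w2i, nkw

# hypothetical 1-star reviews: a "certificate issuance" cluster and a
# "password" cluster with disjoint vocabularies
docs = [["certificate", "issue", "error", "certificate", "issue"]] * 4 \
     + [["password", "login", "reset", "password", "login"]] * 4
w2i, nkw = lda_gibbs(docs)
topic_of = lambda w: max(range(2), key=lambda k: nkw[k][w2i[w]])
# words from the same cluster should land in the same topic
```

In the actual study the reviews would first be tokenized and cleaned (Korean morphological analysis, stop-word removal) before fitting, and the number of topics chosen by model diagnostics.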

No-Touch vs. Conventional Radiofrequency Ablation Using Twin Internally Cooled Wet Electrodes for Small Hepatocellular Carcinomas: A Randomized Prospective Comparative Study

  • Yun Seok Suh;Jae Won Choi;Jeong Hee Yoon;Dong Ho Lee;Yoon Jun Kim;Jeong Hoon Lee;Su Jong Yu;Eun Ju Cho;Jung Hwan Yoon;Jeong Min Lee
    • Korean Journal of Radiology / v.22 no.12 / pp.1974-1984 / 2021
  • Objective: This study aimed to compare the efficacy of no-touch (NT) radiofrequency ablation (RFA) and conventional RFA using twin internally cooled wet (TICW) electrodes in the bipolar mode for the treatment of small hepatocellular carcinomas (HCCs). Materials and Methods: In this single-center, two-arm, parallel-group, prospective randomized controlled study, we performed a 1:1 random allocation of eligible patients with HCCs to receive NT-RFA or conventional RFA between October 2016 and September 2018. The primary endpoint was the cumulative local tumor progression (LTP) rate after RFA. Secondary endpoints included the technical conversion rate of NT-RFA, intrahepatic distant recurrence, extrahepatic metastasis, technical parameters, technical efficacy, and rates of complications. Cumulative LTP rates were analyzed using Kaplan-Meier analysis and the Cox proportional hazards regression model. Considering conversion cases from NT-RFA to conventional RFA, both intention-to-treat and as-treated analyses were performed. Results: Enrolled patients were randomly assigned to the NT-RFA group (37 patients with 38 HCCs) or the conventional RFA group (36 patients with 38 HCCs). Among the NT-RFA group patients, conversion to conventional RFA occurred in four patients (10.8%, 4/37). In the intention-to-treat analysis, the 1- and 3-year cumulative LTP rates were both 5.6% in the NT-RFA group, and 11.8% and 21.3%, respectively, in the conventional RFA group (p = 0.073, log-rank). In the as-treated analysis, LTP rates at 1 year and 3 years were 0% and 0%, respectively, in the NT-RFA group, and 15.6% and 24.5%, respectively, in the conventional RFA group (p = 0.004, log-rank). In the as-treated multivariable Cox regression analysis, RFA type was the only significant predictive factor for LTP (hazard ratio = 0.061 with conventional RFA as the reference, 95% confidence interval = 0.000-0.497; p = 0.004). There were no significant differences in procedure characteristics between the two groups, and no procedure-related deaths or major complications were observed. Conclusion: NT-RFA using TICW electrodes in bipolar mode demonstrated significantly lower cumulative LTP rates than conventional RFA for small HCCs, which warrants a larger study for further confirmation.
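The cumulative LTP comparison rests on the Kaplan-Meier estimator, which handles censored follow-up (patients who leave observation without the event). A minimal sketch on toy follow-up data, not the trial's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each distinct event time t, the
    survival estimate is multiplied by (1 - d_t / n_t), where d_t is the
    number of events at t and n_t the number still at risk just before t.
    events[i] is 1 for an observed event, 0 for censoring."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]        # events at this time
            n += 1                       # all subjects leaving risk set
            i += 1
        if d:                            # curve steps only at event times
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= n
    return curve

# hypothetical follow-up (months, event indicator): censored patients
# reduce the risk set without stepping the curve down
curve = kaplan_meier([6, 10, 14, 20, 30], [1, 0, 1, 0, 1])
```

Censoring is why the naive fraction "events / patients" understates survival differences; the log-rank test and Cox model cited in the abstract compare such curves while accounting for it.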