• Title/Summary/Keyword: speed correction


Analysis on the Correction Factor of Emission Factors and Verification for Fuel Consumption Differences by Road Types and Time Using Real Driving Data (실 주행 자료를 이용한 도로유형·시간대별 연료소모량 차이 검증 및 배출계수 보정 지표 분석)

  • LEE, Kyu Jin;CHOI, Keechoo
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.5
    • /
    • pp.449-460
    • /
    • 2015
  • The reliability of air quality evaluation results for green transportation could be improved by applying correct emission factors. Unlike previous studies, which estimated emission factors for vehicles in laboratory experiments, this study investigates emission factors according to road type and time of day using real driving data. The real driving data were collected with a Portable Activity Monitoring System (PAMS) by road type and time, and fuel consumption was compared and analyzed from the collected data. The results show that fuel consumption on national highways is 17.33% higher than on expressways. In addition, the average fuel consumption at peak time is 4.7% higher than at non-peak time at 22.5 km/h. The difference in fuel consumption across road types and times was verified using ANCOVA and MANOVA. As a result, the hypothesis of this study - that fuel consumption differs according to road type and time even when the travel speed is the same - proved valid. The study also suggests a correction factor for emission factors based on the difference in fuel consumption. It is expected that this study can improve the reliability of emission estimates for mobile pollution sources.
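
A minimal sketch of how a fuel-consumption-based correction might be applied, assuming (as a simplification not stated in the abstract) that the correction scales linearly with the fuel-consumption ratio; the base emission factor of 100 g/km is hypothetical:

```python
def correction_factor(fc_observed, fc_reference):
    """Ratio-based correction: how much an emission factor derived
    under the reference condition should be scaled for the observed
    condition, assuming emissions track fuel consumption linearly."""
    return fc_observed / fc_reference

# National highway fuel consumption is 17.33% higher than on the
# expressway at the same travel speed (value from the abstract).
cf_road = correction_factor(1.1733, 1.0)
corrected_emission = 100.0 * cf_road  # hypothetical base factor, g/km
```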

Development of the Precision Image Processing System for CAS-500 (국토관측위성용 정밀영상생성시스템 개발)

  • Park, Hyeongjun;Son, Jong-Hwan;Jung, Hyung-Sup;Kweon, Ki-Eok;Lee, Kye-Dong;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_2
    • /
    • pp.881-891
    • /
    • 2020
  • Recently, the Ministry of Land, Infrastructure and Transport and the Ministry of Science and ICT have been developing the Land Observation Satellite (CAS-500) to meet the increased demand for high-resolution satellite images. Expected image products of CAS-500 include precision orthoimages, Digital Surface Models (DSM), change detection maps, etc. The quality of these products is determined by the geometric accuracy of the satellite images. Therefore, precise geometric correction of CAS-500 images is important for producing high-quality products. Geometric correction requires Ground Control Points (GCPs), which are usually extracted manually from orthoimages and digital maps; acquiring GCPs this way takes a lot of time. It is therefore necessary to extract GCPs automatically and reduce the time required for GCP extraction and orthoimage generation. To this end, the Precision Image Processing (PIP) System was developed for CAS-500 images to minimize user intervention in GCP extraction. This paper explains the products, processing steps, function modules, and database of the PIP System. The performance of the system in terms of processing speed is also presented. It is expected that, through the developed system, precise orthoimages can be generated promptly from all CAS-500 images over the Korean peninsula. As future work, the system needs to be extended to handle automated orthoimage generation for overseas regions.

Digital Processing and Acoustic Backscattering Characteristics on the Seafloor Image by Side Scan Sonar (Side Scan Sonar 탐사자료의 영상처리와 해저면 Backscattering 음향특성)

  • 김성렬;유홍룡
    • 한국해양학회지
    • /
    • v.22 no.3
    • /
    • pp.143-152
    • /
    • 1987
  • The digital data were obtained using a Kennedy 9000 magnetic tape deck connected to the SMS960 side scan sonar during the field operations. Data from three consecutive survey tracks near Seongsan-po, Cheju were used for this study. The software was mainly written in Fortran-77 on a VAX 11/780 minicomputer (CPU memory: 4 MB). The established mapping system consists of the pretreatment and the digital processing of seafloor image data. The pretreatment was necessary because the raw digital data format of the field magnetic tapes was not compatible with the VAX system. Therefore, the raw data were read by a personal computer using Assembler, the data format was converted to be IBM compatible, and the data were then transferred to the VAX system. The digital processing includes geometrical correction for slant range, statistical analysis, and cartography of the seafloor image. The sound speed in the water column was assumed to be 1,500 m/s for the slant range correction, and the moving average method was used for signal trace smoothing. Histograms and cumulative curves were computed for the statistical analysis, whose purpose was to classify the backscattering strength of the sea bottom. The seafloor image was displayed on the color screen of a TEKTRONIX 4113B terminal. According to a brief interpretation of the resulting image map, rocky and sedimentary bottoms were discriminated very well. It was also shown that the backscattered acoustic pressure correlates with the grain size and sorting of the surface sediments.
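
The two processing steps named in the abstract (slant range correction at an assumed 1,500 m/s sound speed, and moving-average smoothing) can be sketched as follows; the flat-seafloor geometry and the 3-sample window are assumptions for illustration, not details from the paper:

```python
import math

SOUND_SPEED = 1500.0  # m/s, assumed constant as in the study

def slant_to_ground_range(two_way_time_s, tow_height_m):
    """Convert a two-way travel time into a horizontal (ground) range,
    correcting for the slant geometry of the side scan beam over an
    assumed flat seafloor."""
    slant = SOUND_SPEED * two_way_time_s / 2.0
    if slant <= tow_height_m:
        return 0.0  # echo arrived before the first bottom return
    return math.sqrt(slant**2 - tow_height_m**2)

def moving_average(signal, window=3):
    """Simple moving-average smoothing of a backscatter trace
    (edges use the samples actually available)."""
    half = window // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]
```

For example, a 0.1 s two-way time gives a 75 m slant range; with the towfish 45 m above the bottom, the ground range is 60 m.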


THE LUMINOSITY-LINEWIDTH RELATION AS A PROBE OF THE EVOLUTION OF FIELD GALAXIES

  • GUHATHAKURTA PURAGRA;ING KRISTINE;RIX HANS-WALTER;COLLESS MATTHEW;WILLIAMS TED
    • Journal of The Korean Astronomical Society
    • /
    • v.29 no.spc1
    • /
    • pp.63-64
    • /
    • 1996
  • The nature of distant faint blue field galaxies remains a mystery, despite the fact that much attention has been devoted to this subject in the last decade. Galaxy counts, particularly those in the optical and near-ultraviolet bandpasses, have been demonstrated to be well in excess of those expected in the 'no-evolution' scenario. This has usually been taken to imply that galaxies were brighter in the past, presumably due to a higher rate of star formation. More recently, redshift surveys of galaxies as faint as B ~ 24 have shown that the mean redshift of faint blue galaxies is lower than that predicted by standard evolutionary models (designed to fit the galaxy counts). The galaxy number count data and redshift data suggest that evolutionary effects are most prominent at the faint end of the galaxy luminosity function. While these data constrain the form of evolution of the overall luminosity function, they do not constrain evolution in individual galaxies. We are carrying out a series of observations as part of a long-term program aimed at a better understanding of the nature and amount of luminosity evolution in individual galaxies. Our study uses the luminosity-linewidth relation (Tully-Fisher relation) for disk galaxies as a tool to study luminosity evolution. Several studies of a related nature are being carried out by other groups. A specific experiment to test a 'no-evolution' hypothesis is presented here. We have used the AUTOFIB multifibre spectrograph on the 4-metre Anglo-Australian Telescope (AAT) and the Rutgers Fabry-Perot imager on the Cerro Tololo Interamerican Observatory (CTIO) 4-metre telescope to measure the internal kinematics of a representative sample of faint blue field galaxies in the redshift range z = 0.15-0.4.
The emission line profiles of [OII] and [OIII] in a typical sample galaxy are significantly broader than the instrumental resolution (100-120 km $s^{-1}$), and it is possible to make a reliable determination of the linewidth. Detailed and realistic simulations based on the properties of nearby, low-luminosity spirals are used to convert the measured linewidth into an estimate of the characteristic rotation speed, making statistical corrections for the effects of inclination, non-uniform distribution of ionized gas, rotation curve shape, finite fibre aperture, etc. The (corrected) mean characteristic rotation speed for our distant galaxy sample is compared to the mean rotation speed of local galaxies of comparable blue luminosity and colour. The typical galaxy in our distant sample has a B-band luminosity of about 0.25 L$\ast$ and a colour that corresponds to the Sb-Sd/Im range of Hubble types. Details of the AUTOFIB fibre spectroscopic study are described by Rix et al. (1996). Follow-up deep near-infrared imaging with the 10-metre Keck telescope + NIRC combination and high angular resolution imaging with the Hubble Space Telescope's WFPC2 are being used to determine the structural and orientation parameters of galaxies on an individual basis. This information is being combined with the spatially resolved CTIO Fabry-Perot data to study the internal kinematics of distant galaxies (Ing et al. 1996). The two main questions addressed by these preliminary studies are: 1. Do galaxies of a given luminosity and colour have the same characteristic rotation speed in the distant and local Universe? The distant galaxies in our AUTOFIB sample have a mean characteristic rotation speed of ~70 km $s^{-1}$ after correction for measurement bias (Fig. 1); this is inconsistent with the characteristic rotation speed of local galaxies of comparable photometric properties (105 km $s^{-1}$) at the > 99% significance level (Fig. 2).
A straightforward explanation for this discrepancy is that faint blue galaxies were about 1-1.5 mag brighter (in the B band) at z ~ 0.25 than their present-day counterparts. 2. What is the nature of the internal kinematics of faint field galaxies? The linewidths of these faint galaxies appear to be dominated by the global disk rotation. The larger galaxies in our sample are about 2"-5" in diameter, so one can get direct insight into the nature of their internal velocity field from the ~1" seeing CTIO Fabry-Perot data. A montage of Fabry-Perot data is shown in Fig. 3. The linewidths are too large (by $5\sigma$) to be caused by turbulence in giant HII regions.


Low Power Turbo Decoder Design Techniques Using Two Stopping Criteria (이중 정지 기준을 사용한 저 전력 터보 디코더 설계 기술)

  • 임호영;강원경;신현철;김경호
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.9
    • /
    • pp.39-48
    • /
    • 2004
  • Turbo codes, whose bit error rate performance is close to the Shannon limit, have been adopted as part of the standard for third-generation high-speed wireless data services. Iterative turbo decoding results in decoding delay and high power consumption. As wireless communication systems can only use a limited power supply, low power design techniques are essential for mobile device implementation. This paper proposes new, effective criteria for stopping the iteration process in turbo decoding to reduce power consumption. By setting two stopping criteria, a decodable threshold and an undecodable threshold, we can effectively reduce the number of decoding iterations with only negligible error-correcting performance degradation. Simulation results show that, compared with an existing typical method, the number of unsuccessful error corrections can be reduced by 89% and the number of decoding iterations by 29% on average over 12,500 simulations.
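
The dual-threshold iteration control described above can be sketched as follows. This is an illustration only: the abstract does not specify the reliability metric, so a per-iteration minimum-absolute-LLR value is assumed as a hypothetical stand-in:

```python
def decode_with_two_criteria(metrics, decodable_thr, undecodable_thr,
                             max_iters=8):
    """Iteration control with two stopping criteria.

    metrics: a per-iteration reliability value (here assumed to be the
    minimum absolute LLR after each iteration -- a hypothetical stand-in
    for a real decoder's internal state).
    Returns (iterations_used, status).
    """
    for i, metric in enumerate(metrics[:max_iters], start=1):
        if metric >= decodable_thr:
            return i, "decoded"        # reliable enough: stop early
        if metric <= undecodable_thr:
            return i, "undecodable"    # hopeless frame: give up early
    return min(len(metrics), max_iters), "max_iterations"
```

Stopping early on hopeless frames is where the power saving on undecodable blocks comes from; stopping early on clean frames saves the bulk of the iterations.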

A Fast Sorting Strategy Based on a Two-way Merge Sort for Balancing the Capacitor Voltages in Modular Multilevel Converters

  • Zhao, Fangzhou;Xiao, Guochun;Liu, Min;Yang, Daoshu
    • Journal of Power Electronics
    • /
    • v.17 no.2
    • /
    • pp.346-357
    • /
    • 2017
  • The Modular Multilevel Converter (MMC) is particularly attractive for medium and high power applications such as High-Voltage Direct Current (HVDC) systems. In order to reach a high voltage, the number of cascaded submodules (SMs) is generally very large. Thus, in the applications with hundreds or even thousands of SMs such as MMC-HVDCs, the sorting algorithm of the conventional voltage balancing strategy is extremely slow. This complicates the controller design and increases the hardware cost tremendously. This paper presents a Two-Way Merge Sort (TWMS) strategy based on the prediction of the capacitor voltages under ideal conditions. It also proposes an innovative Insertion Sort Correction for the TWMS (ISC-TWMS) to solve issues in practical engineering under non-ideal conditions. The proposed sorting methods are combined with the features of the MMC-HVDC control strategy, which significantly accelerates the sorting process and reduces the implementation efforts. In comparison with the commonly used quicksort algorithm, it saves at least two-thirds of the sorting execution time in one arm with 100 SMs, and saves more with a higher number of SMs. A 501-level MMC-HVDC simulation model in PSCAD/EMTDC has been built to verify the validity of the proposed strategies. The fast speed and high efficiency of the algorithms are demonstrated by experiments with a DSP controller (TMS320F28335).
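
The core of a two-way merge sort is the linear-time merge of two already-sorted runs; the connection to the MMC (an assumption based on the abstract, not the paper's exact procedure) is that under ideal conditions the inserted and bypassed SM voltages each remain internally sorted, so one merge restores global order instead of a full re-sort:

```python
def merge_sorted_runs(run_a, run_b):
    """O(n) two-way merge of two sorted runs (e.g. predicted capacitor
    voltages of inserted SMs and of bypassed SMs), replacing an
    O(n log n) full sort of the combined list."""
    merged, i, j = [], 0, 0
    while i < len(run_a) and j < len(run_b):
        if run_a[i] <= run_b[j]:
            merged.append(run_a[i]); i += 1
        else:
            merged.append(run_b[j]); j += 1
    merged.extend(run_a[i:])  # one run is exhausted;
    merged.extend(run_b[j:])  # append the remainder of the other
    return merged
```

With 100 SMs per arm this replaces a quicksort of 100 elements with a single 100-step merge, which is consistent with the two-thirds execution-time saving the abstract reports.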

Precise Positioning of Farm Vehicle Using Plural GPS Receivers - Error Estimation Simulation and Positioning Fixed Point - (다중 GPS 수신기에 의한 농업용 차량의 정밀 위치 계측(I) - 오차추정 시뮬레이션 및 고정위치계측 -)

  • Kim, Sang-Cheol;Cho, Sung-In;Lee, Seung-Gi;Lee, W.Y.;Hong, Young-Gi;Kim, Gook-Hwan;Cho, Hee-Je;Gang, Ghi-Won
    • Journal of Biosystems Engineering
    • /
    • v.36 no.2
    • /
    • pp.116-121
    • /
    • 2011
  • This study was conducted to develop a robust navigator for positioning in precision farming, by building a plural GPS receiver with 4 sets of GPS antennas. In order to improve positioning accuracy by integrating GPS signals received simultaneously, an algorithm for effectively processing the plural GPS signals was designed. Performance of the algorithm was tested using a simulation program and a fixed point in WGS 84 coordinates. The results of this study are summarized as follows. 1. Signals from 4 lower-grade GPS receivers were integrated by a Kalman filter algorithm and a geometric algorithm to increase the positioning accuracy of the data. 2. The prototype was composed of 4 GPS receivers and INS components: All Star receivers manufactured by CMC, a gyro compass made by KVH, a ground speed sensor, and integration software based on an RTOS (Real Time Operating System). 3. The integration algorithm was simulated by a program that could generate random position errors of less than 10 m, and was tested with the prototype at a fixed position. 4. When the navigation data were integrated by the geometric correction and Kalman filter algorithms, the estimated positioning errors were less than 0.6 m and 1.0 m in the simulation and fixed-position tests, respectively.
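
A minimal sketch of the geometric part of such an integration, assuming (hypothetically; the paper's actual algorithm is not given in the abstract) that each antenna's known mounting offset is subtracted before averaging, so that the N independent receivers' random errors partially cancel:

```python
def fuse_positions(readings, offsets):
    """Geometric fusion of several GPS antennas in local (x, y) metres.

    readings: measured position of each antenna
    offsets:  known offset of each antenna from the vehicle centre
    Each reading is shifted back to the vehicle centre, then averaged;
    with N independent receivers the random error shrinks roughly
    as 1/sqrt(N).
    """
    centres = [(rx - ox, ry - oy)
               for (rx, ry), (ox, oy) in zip(readings, offsets)]
    n = len(centres)
    return (sum(c[0] for c in centres) / n,
            sum(c[1] for c in centres) / n)
```

In a full system this averaged fix would then feed the Kalman filter together with the gyro compass and ground speed measurements.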

A Study on the Development Web Services Component Based Service Oriented Architecture (SOA 기반의 웹 서비스 컴포넌트 개발에 관한 연구)

  • Park Dong-Sik;Shin Ho-Jun;Kim Haeng-Kon
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.10
    • /
    • pp.1496-1504
    • /
    • 2004
  • Web services can connect businesses across enterprises through the Internet, speeding up service construction and lowering development costs. Integration with other domains is also straightforward, and updates and corrections are easy thanks to the reusability and replaceability offered by component-based development. In this paper, we suggest a development process for building an architecture on a Service Oriented Architecture (SOA) and for efficiently integrating the components that make up a web service implemented on the supplier side. The suggested architecture integrates the constituent components efficiently, and the development process is described: the functions of each layer are logically stratified and defined, and an architecture based on these logical layers is presented as the base structure of the web service. The web services consist of Facade and Backside components; the Facade component exposes the web service functions. We describe the process of developing the Facade component and present a mailing web service as a case study. This approach can decrease production cost and development time, and the component-based web service improves reliability through its reusability and replaceability.
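
The Facade/Backside split can be illustrated with a small sketch; the class and method names below are hypothetical, not taken from the paper's mailing-service case study:

```python
class MailBackside:
    """Backside component: business logic, never exposed directly."""
    def format_message(self, to, body):
        return f"To: {to}\n\n{body}"

    def deliver(self, message):
        return True  # placeholder for the real delivery logic

class MailServiceFacade:
    """Facade component: the only surface published as a web service.

    Because callers bind to the facade, the Backside implementation can
    be replaced (replaceability) without changing the published
    interface -- which is the point of the split.
    """
    def __init__(self, backside=None):
        self._backside = backside or MailBackside()

    def send_mail(self, to, body):  # the operation a WSDL would expose
        message = self._backside.format_message(to, body)
        return self._backside.deliver(message)
```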


A performance analysis of layered LDPC decoder for mobile WiMAX system (모바일 WiMAX용 layered LDPC 복호기의 성능분석)

  • Kim, Eun-Suk;Kim, Hae-Ju;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.4
    • /
    • pp.921-929
    • /
    • 2011
  • This paper describes an analysis of the decoding performance and decoding convergence speed of a layered LDPC (low-density parity-check) decoder for mobile WiMAX systems, and searches for optimal design conditions for hardware implementation. A fixed-point model of the LDPC decoder, based on the min-sum algorithm and a layered decoding scheme, is implemented and simulated in Matlab. Through fixed-point simulations for the block lengths of 576, 1440, and 2304 bits and the code rates of 1/2, 2/3A, 2/3B, 3/4A, 3/4B, and 5/6 specified in the IEEE 802.16e standard, the effects of internal bit-width, block length, and code rate on the decoding performance are analyzed. Simulation results show that a fixed-point bit-width of at least 8 bits, with a 5-bit integer part, should be used for acceptable decoding performance.
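
The 8-bit/5-integer-bit quantization and the min-sum check-node rule can be sketched as follows. This is a simplified illustration, not the paper's model: the check-node update below uses the minimum over all inputs rather than the exclude-self minimum of a real decoder:

```python
def quantize(value, total_bits=8, int_bits=5):
    """Quantize an LLR to signed fixed point: int_bits integer bits
    (including sign) and (total_bits - int_bits) fractional bits,
    saturating at the representable range."""
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(value * scale)))
    return q / scale

def min_sum_check_update(llrs):
    """Min-sum check-node update: product of input signs times the
    minimum input magnitude (simplified: a real decoder excludes the
    edge being updated from the minimum)."""
    sign = 1
    for v in llrs:
        if v < 0:
            sign = -sign
    return sign * min(abs(v) for v in llrs)
```

With 3 fractional bits the representable LLR range is about ±15.875 in steps of 0.125, which is why saturation behavior matters at this bit-width.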

A Study on Minimization Method of Reading Error Range and Implementation of Postal 4-state Bar Code Reader with Raster Beam (Raster Beam에 의한 우편용 4-state 바코드 판독기 구현 및 판독오차 범위의 최소화 방법에 관한 연구)

  • Park, Moon-Sung;Song, Jae-Gwan;Nam, Yun-Seok;Kim, Hye-Kyu;Jung, Hoe-Kyung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.7
    • /
    • pp.2149-2160
    • /
    • 2000
  • Recently, many efforts have been made at ETRI to develop an automatic processing system for delivery sequence sorting, which requires a postal 4-state bar code system to encode delivery points. The 4-state bar code, called the postal 4-state bar code, has been specifically designed for high-speed information processing in logistics and automatic processing of mail items. The information in the 4-state bar code comprises mail data such as the post code, delivery sequence number, error correction codeword, customer information, and a unique ID. This paper addresses the issue of reducing reading errors in a raster-beam-based postal 4-state bar code reader. The raster beam scanning distributes spots unequally across each unit, which causes reading errors. We propose a method for reducing the bar code reading error by adjusting the measured bar code width values to their average value over each interval. Test results show that this method reduces the average reading error rate by approximately 99.88%.
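
The width-averaging correction can be sketched as follows; the interval length of 4 bars is an assumed parameter for illustration, since the abstract does not state the interval size:

```python
def correct_bar_widths(measured, bars_per_interval=4):
    """Replace each measured bar width with the average width of its
    interval, suppressing the jitter caused by the raster beam's
    unevenly distributed spots per unit."""
    corrected = []
    for start in range(0, len(measured), bars_per_interval):
        chunk = measured[start:start + bars_per_interval]
        avg = sum(chunk) / len(chunk)
        corrected.extend([avg] * len(chunk))
    return corrected
```

Because every bar in an interval is nominally the same module width, the interval average is a better estimate of the true width than any single jittered measurement.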
