• Title/Summary/Keyword: Power Constraint


Motion Estimation and Mode Decision Algorithm for Very Low-complexity H.264/AVC Video Encoder (초저복잡도 H.264 부호기의 움직임 추정 및 모드 결정 알고리즘)

  • Yoo Youngil;Kim Yong Tae;Lee Seung-Jun;Kang Dong Wook;Kim Ki-Doo
    • Journal of Broadcast Engineering
    • /
    • v.10 no.4 s.29
    • /
    • pp.528-539
    • /
    • 2005
  • H.264 has been adopted as the video codec for various multimedia services, such as DMB and next-generation DVD, because of its superior coding performance. However, the reference codec of the standard, the joint model (JM), contains quite a few algorithms that are too complex for resource-constrained embedded environments. This paper introduces a very low-complexity H.264 encoding algorithm applicable to such environments. The proposed algorithm was realized by restricting some coding tools, on the condition that doing so not cause severe degradation of RD performance, and by adding a few early-termination and bypass conditions to the motion estimation and mode decision processes. When encoding a 7.5 fps QCIF sequence at 64 kbps with the proposed algorithm, the encoder yields a PSNR about 0.4 dB lower than the standard JM, but requires only 15% of the computational complexity and drastically lowers the required memory and power consumption. By porting the proposed H.264 codec to a PDA with an Intel PXA255 processor, we verified the feasibility of H.264-based MMS (Multimedia Messaging Service) on a PDA.
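The early-termination idea described in the abstract can be sketched as a block-matching search that abandons the remaining candidates as soon as the matching cost falls below a threshold. A minimal sketch; the 16×16 block size, ±4 search range, and SAD threshold are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def motion_search_early_exit(cur, ref, bx, by, bsize=16, rng=4, threshold=512):
    """Full search over a small window, bypassing remaining candidates
    once a SAD below `threshold` is found (early termination)."""
    cur_block = cur[by:by+bsize, bx:bx+bsize]
    best_mv = (0, 0)
    best_sad = sad(cur_block, ref[by:by+bsize, bx:bx+bsize])
    if best_sad < threshold:                      # zero-motion early exit
        return best_mv, best_sad
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            s = sad(cur_block, ref[y:y+bsize, x:x+bsize])
            if s < best_sad:
                best_mv, best_sad = (dx, dy), s
                if best_sad < threshold:          # early-termination condition
                    return best_mv, best_sad
    return best_mv, best_sad
```

The bypass saves most of the candidate evaluations whenever a good match appears early, which is the source of the complexity reduction the abstract reports.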

Thermal Analysis of 3D Multi-core Processors with Dynamic Frequency Scaling (동적 주파수 조절 기법을 적용한 3D 구조 멀티코어 프로세서의 온도 분석)

  • Zeng, Min;Park, Young-Jin;Lee, Byeong-Seok;Lee, Jeong-A;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.11
    • /
    • pp.1-9
    • /
    • 2010
  • As process technology scales down, the interconnect has become a major performance constraint for multi-core processors. Recently, to mitigate this interconnect bottleneck, 3D integration techniques have drawn considerable attention. A 3D-integrated multi-core processor has the advantage of reduced global wire length, resulting in improved performance. However, it suffers serious thermal problems due to increased power density. For this reason, thermal-aware design techniques should be considered when designing efficient 3D multi-core processors. In this paper, we analyze the temperature of 3D multi-core processors at the functional unit level through various experiments. We also present temperature characteristics while varying application features, cooling characteristics, and frequency levels. According to our experimental results, the following two rules should be obeyed for thermal-aware 3D processor design. First, to optimize the thermal profile of the cores, the core with higher cooling efficiency should be clocked at a higher frequency. Second, to lower core temperature, a workload with higher thermal impact should be assigned to the core with higher cooling efficiency.
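The two design rules above amount to a rank-and-match assignment: sort cores by cooling efficiency and give the best-cooled cores the highest frequencies and the hottest workloads. A minimal sketch with hypothetical cooling-efficiency and thermal-impact scores (the paper does not define such a scoring API):

```python
def thermal_aware_assignment(cores, workloads, freqs):
    """cores: {core_id: cooling_efficiency}, workloads: {name: thermal_impact},
    freqs: available frequency levels.  Returns {core_id: (workload, freq)}
    following the paper's two rules: best-cooled core gets the highest
    frequency and the most thermally demanding workload."""
    ranked_cores = sorted(cores, key=cores.get, reverse=True)          # best-cooled first
    ranked_work = sorted(workloads, key=workloads.get, reverse=True)   # hottest first
    ranked_freqs = sorted(freqs, reverse=True)                         # highest first
    return {c: (w, f) for c, w, f in zip(ranked_cores, ranked_work, ranked_freqs)}
```

In a real DFS scheme the cooling efficiency per layer would come from the thermal model of the 3D stack; here it is just an input score.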

AC transport current loss analysis for a face-to-face stack of superconducting tapes

  • Yoo, Jaeun;Youm, Dojun;Oh, SangSoo
    • Progress in Superconductivity and Cryogenics
    • /
    • v.15 no.2
    • /
    • pp.34-38
    • /
    • 2013
  • AC losses for face-to-face stacks of four identical coated conductors (CCs) were numerically calculated using the H-formulation combined with the E-J power law and the Kim model. The modeled sample was a face-to-face stack of four 2 mm-wide CC tapes, each with a 2 μm-thick superconducting layer whose critical current density Jc was 2.16×10⁶ A/cm² on an IBAD-MgO template; this geometry was suggested by the Korea Electrotechnology Research Institute as a round-shaped wire for the mitigation of AC loss. For the calculation, the cross section of the stack was modeled simply as four vertically aligned rectangular superconducting (SC) layers obeying E = E₀(J(x,y,t)/Jc(B))ⁿ in the x-y plane, where E₀ = 10⁻⁶ V/cm, Jc(B) is the field-dependent critical current density, and n = 21. The field dependence of the critical current of the sample, measured by the four-probe method, was employed for Jc(B). The model was implemented in a commercial finite element method program. The AC loss properties of the stacks were compared with those of a single 4 cm-wide SC layer with the same critical current density or the same critical current. The constraint that the total current obtained by integrating J(x,y,t) over the cross sections equal the applied transport current was imposed in two different ways. In the first, one fourth of the external current was enforced to flow through each SC layer; in this case, the AC loss values for the stacks were lower than those of the single wide SC layer. This mitigation of the loss is attributed to the reduction of the normal component of the magnetic field near the SC layers, due to the strong expulsion of the magnetic field by the enforced transport current. In the second case, with no such enforcement, the AC loss values were greater than those of the single 4 cm-wide SC layer; here, a phase difference between the currents flowing through the inner and outer SC layers of the stack was observed as the transport current increased, which caused the abrupt increase in AC loss at higher transport currents.
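The constitutive relation used in the calculation can be written down directly. A minimal sketch of the E-J power law combined with a Kim-model Jc(B); E₀, n, and Jc₀ come from the abstract, while the characteristic field B₀ is an assumed illustrative value, since the paper uses a measured Jc(B) curve instead:

```python
import numpy as np

E0 = 1e-6       # V/cm, from the abstract
N = 21          # power-law exponent, from the abstract
JC0 = 2.16e6    # A/cm^2, self-field critical current density, from the abstract
B0 = 0.1        # T, Kim-model characteristic field (assumed illustrative value)

def jc_kim(B):
    """Kim model: critical current density suppressed by the local field |B|."""
    return JC0 / (1.0 + np.abs(B) / B0)

def e_field(J, B):
    """E-J power law used in the H-formulation loss calculation."""
    return E0 * np.sign(J) * (np.abs(J) / jc_kim(B)) ** N

def local_loss_density(J, B):
    """Instantaneous dissipation density p = E * J (W/cm^3)."""
    return e_field(J, B) * J
```

With n = 21 the transition is sharp: the electric field, and hence the loss, is negligible below Jc(B) and grows very steeply above it, which is what makes the current distribution between inner and outer layers matter so much for the total AC loss.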

Joint Base Station and Relay Precoder Design with Relay Local Channel State Information for Multi-relay Aided Multi-user All-MIMO System (다중 릴레이, 다중 사용자 All-MIMO 시스템에서 릴레이 지역 채널 정보를 사용한 기지국 및 릴레이 전처리기 공동 설계 기법)

  • Cho, Young-Min;Jang, Seung-Jun;Kim, Dong-Ku
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.6A
    • /
    • pp.405-419
    • /
    • 2012
  • In this paper, we propose a joint base station (BS) and relay precoder design for a multi-relay-aided multi-user all-multiple-input multiple-output (MIMO) system. The design criterion is to minimize the user sum mean square error (SMSE) under a relay sum power constraint (RSPC), where only local channel state information (CSI) is available at each relay. Local CSI at a relay is defined as the CSI of the channels that the relay itself accesses, among all the first-hop and second-hop channels in the system. With a BS precoder structure concatenated with a block diagonalization (BD) precoder, each relay can determine its own precoder using only local CSI. The proposed scheme is based on sequential iteration of two stages: stage 1 determines the BS precoder and relay precoders jointly using SMSE duality, and stage 2 determines the user receivers. The scheme can be shown theoretically to always converge. We verify that the proposed scheme outperforms simple amplify-and-forward (SAF), the MMSE relay, and the schemes proposed in [1] in terms of both SMSE and sum-rate performance.
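Stage 2 of the iteration, the linear MMSE user receivers for fixed precoders, has a standard closed form. A minimal sketch for a single effective end-to-end channel; the variable names and the unit-power, white-noise assumptions are ours, not the paper's:

```python
import numpy as np

def mmse_receiver(H_eff, noise_var):
    """Linear MMSE receive filter for y = H_eff @ s + n, with unit-power
    streams s and white noise of variance noise_var (stage 2: receivers
    for fixed BS/relay precoders).  Returns G such that s_hat = G @ y."""
    Nr, _ = H_eff.shape
    return H_eff.conj().T @ np.linalg.inv(
        H_eff @ H_eff.conj().T + noise_var * np.eye(Nr))

def sum_mse(H_eff, G, noise_var):
    """Sum MSE of the estimate G @ y over all streams."""
    Ns = H_eff.shape[1]
    E = np.eye(Ns) - G @ H_eff            # signal-distortion term
    return (np.real(np.trace(E @ E.conj().T))
            + noise_var * np.real(np.trace(G @ G.conj().T)))
```

Because each stage minimizes the same SMSE objective for the other stage's variables held fixed, the SMSE is non-increasing across iterations, which is the usual argument behind the convergence claim.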

A Comparison Analysis among Structural Equation Modeling (AMOS, LISREL and PLS) Using the Same Data (동일 데이터를 이용한 구조방정식 툴 간의 비교분석)

  • Nam, Soo-tai;Kim, Do-goan;Jin, Chan-yong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.7
    • /
    • pp.978-984
    • /
    • 2018
  • Structural equation modeling refers to statistical procedures that simultaneously perform path analysis and confirmatory factor analysis. Today, this statistical procedure is an essential tool for researchers in the social sciences. AMOS, LISREL, and PLS are representative tools that can perform structural equation modeling analysis. AMOS provides a convenient graphical user interface for beginners. PLS has the advantage of not imposing a normal-distribution constraint, in addition to offering a graphical user interface. Therefore, we compared and analyzed the three tools (applications) most commonly used in the social sciences. Based on structural equation modeling, confirmatory factor analysis was performed using IBM AMOS Ver. 23, LISREL 8.70, and SmartPLS 2.0. The comparative results show that LISREL has higher explanatory power for the dependent variables than the other analytical tools. The path coefficients and T-values produced by the three tools were similar. This study suggests practical and theoretical implications based on these results.

A Construction of Pointer-based Model for Main Memory Database Systems (주기억장치 데이터베이스를 위한 포인터 기반 모델의 구축)

  • Bae, Myung-Nam;Choi, Wan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.4B
    • /
    • pp.323-338
    • /
    • 2003
  • A main memory database system (MMDBMS) efficiently supports various database applications that require high performance, since it employs main memory rather than disk as primary storage. Recently, needs have increased for fast data processing, for efficient modeling of applications requiring complicated structures, and for conformance to applications that need strict data consistency. Because all data resides in main memory, an MMDBMS can support expressive methods of representing data that satisfy these needs without performance overhead. These methods comprise the operations that manipulate the data and constraints, such as referential integrity, specified in detail. The data model consisting of these methods is an essential component that determines the expressive power of a DBMS. In this paper, we discuss various requirements for providing communication services and propose a data model that supports them. The main issues discussed are 1) defining relationships between tables using pointers, 2) navigating the data using these relationships, 3) supporting referential integrity for pointers, 4) supporting uniform processing time for joins, 5) supporting object-oriented concepts, and 6) sharing an index across multiple tables. We present a pointer-based data model designed around these issues to efficiently support such demanding environments.
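Issues 1)-3), pointer-valued relationships with referential integrity, can be illustrated in a few lines: because rows live in main memory, a relationship is a direct pointer, navigation is a dereference, and integrity means refusing to delete a row that is still pointed to. A toy sketch; the class names and reference-counting scheme are illustrative, not the paper's design:

```python
class Row:
    """An in-memory row; relationships are held as direct object pointers."""
    def __init__(self, **fields):
        self.fields = fields
        self.ref_count = 0            # how many pointers currently target this row

class PointerDB:
    """Minimal sketch of a pointer-based model with referential integrity."""
    def link(self, child, parent):
        """Relate child -> parent by storing a direct pointer (issue 1)."""
        child.fields['parent'] = parent
        parent.ref_count += 1

    def navigate(self, child):
        """Navigation is just pointer dereference, no join needed (issue 2)."""
        return child.fields['parent']

    def delete(self, row):
        """Referential integrity: a row that is still referenced
        cannot be deleted (issue 3)."""
        if row.ref_count:
            raise ValueError("referential integrity: row is still referenced")
        return True
```

The pointer dereference in `navigate` is what gives the uniform join time mentioned in issue 4: following a stored pointer costs the same regardless of table size.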

XML Schema Evolution Approach Assuring the Automatic Propagation to XML Documents (XML 문서에 자동 전파하는 XML 스키마 변경 접근법)

  • Ra, Young-Gook
    • The KIPS Transactions:PartD
    • /
    • v.13D no.5 s.108
    • /
    • pp.641-650
    • /
    • 2006
  • XML is self-describing and uses DTD or XML Schema to constrain its structure. Even though XML Schema is still only a recommendation, it will be widely used, because a DTD is not itself XML and has limited expressive power. The structure defined by an XML schema, as well as the data of the XML documents, can change for complex reasons: errors in the XML schema design, new requirements from new applications, and so on. Thus, we propose XML schema evolution operators extracted from an analysis of XML schema updates. These schema evolution operators enable XML schema updates that would be impractical without supporting tools when a large number of XML documents conform to the schema. In addition, these operators automatically find the update locations in the XML documents registered to the XSE system and keep those documents valid with respect to the XML schema, rather than merely well-formed. This paper is the first attempt to update the XML schemas of XML documents together with the documents themselves, and it provides a comprehensive set of schema-updating operations. Our work supports XML application development and maintenance in that it helps update the structure of XML documents, as well as their data, in an easy and precise manner.
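One evolution operator of the kind described, adding a child element to the schema and propagating a default value into existing documents so they stay valid rather than merely well-formed, might look as follows. A hedged sketch using Python's standard ElementTree; the function name and default-value policy are our assumptions, not the XSE system's API:

```python
import xml.etree.ElementTree as ET

def propagate_add_element(doc_root, parent_tag, new_tag, default_text):
    """Sketch of one schema-evolution operator: after the schema adds a
    required child `new_tag` under `parent_tag`, insert it with a default
    value into every document instance that lacks it, keeping the
    documents valid against the evolved schema.  Returns the number of
    elements that were patched."""
    changed = 0
    for parent in doc_root.iter(parent_tag):
        if parent.find(new_tag) is None:       # only patch where it is missing
            child = ET.SubElement(parent, new_tag)
            child.text = default_text
            changed += 1
    return changed
```

A full operator set would also cover removals, renames, and type changes, each with its own document-side propagation rule.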

Analysis of a CubeSat Magnetic Cleanliness for the Space Science Mission (우주과학임무를 위한 큐브위성 자기장 청결도 분석)

  • Jo, Hye Jeong;Jin, Ho;Park, Hyeonhu;Kim, Khan-Hyuk;Jang, Yunho;Jo, Woohyun
    • Journal of Space Technology and Applications
    • /
    • v.2 no.1
    • /
    • pp.41-51
    • /
    • 2022
  • The CubeSat is a satellite platform widely used not only for Earth observation but also for space exploration, including magnetic field investigation missions that observe space physics phenomena with various configurations of the magnetometer instrument unit. For magnetic field measurement, the magnetometer should be placed far from the satellite body to minimize magnetic disturbances from the satellite itself, but its accommodation is limited by the volume constraints of small satellites such as a CubeSat. In this paper, we analyze how much the magnetic interference generated by the CubeSat can affect the reliability of magnetic field measurements. For this analysis, we used a reaction wheel and torque rods, which have relatively high power consumption, as the major noise sources; their magnetic dipole moments were derived from the manufacturers' data sheets. We confirmed that, in a space without an external magnetic field, the residual moment of a magnetic torquer located in the middle of a 3U CubeSat can produce a field of up to 36,000 nT at the outermost end of the CubeSat body. For accurate magnetic field measurements below 1 nT, we found that the magnetometer should be at least 0.6 m away from the CubeSat body. We expect this analysis method to play an important role in magnetic cleanliness analysis when designing a CubeSat for magnetic field measurement missions.
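Standoff estimates of this kind are usually made with the worst-case on-axis dipole formula, B = μ₀·2m/(4πr³). A minimal sketch; the dipole moment is whatever the component data sheet gives, and the paper's exact noise model (multiple sources, off-axis geometry) may differ:

```python
import math

MU0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A

def dipole_field_on_axis(m, r):
    """On-axis magnetic flux density (tesla) of a dipole of moment m (A*m^2)
    at distance r (m): B = mu0 * 2m / (4*pi*r^3) -- the worst-case direction."""
    return MU0 * 2 * m / (4 * math.pi * r ** 3)

def standoff_distance(m, b_max):
    """Boom length (m) needed so the dipole contributes at most b_max tesla."""
    return (MU0 * 2 * m / (4 * math.pi * b_max)) ** (1.0 / 3.0)
```

The 1/r³ falloff is why a modest boom helps so much: doubling the distance cuts the disturbance by a factor of eight.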

The Effects of Intention Inferences on Scarcity Effect: Moderating Effect of Scarcity Type, Scarcity Depth (소비자의 기업의도 추론이 희소성 효과에 미치는 영향: 수량한정 유형과 폭의 조절효과)

  • Park, Jong-Chul;Na, June-Hee
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.195-215
    • /
    • 2008
  • Scarcity is a pervasive aspect of human life and a fundamental precondition of consumers' economic behavior. The scarcity message is also a powerful social-influence principle used by marketers to increase the subjective desirability of products. Because valuable objects are often scarce, consumers tend to infer that scarce objects are valuable, and marketers often base promotional appeals on the principle of scarcity. In particular, advertisers and retailers often promote their products using restrictions. These restrictions act to constrain consumers' ability to take advantage of the promotion and can assume several forms. For example, some promotions are advertised as limited-time offers, while others limit the quantity that can be bought at the deal price, employing statements such as 'limit one per consumer,' 'limit 5 per customer,' or 'limited products for a special commemoration celebration.' Some retailers use such statements extensively: a recent weekly flyer by a prominent retailer limited purchase quantities on 50% of the specials advertised on its front page. When consumers see these phrases, they often infer value from a product that has limited availability or is promoted as scarce. However, past researchers explored only a direct relationship between purchase-quantity and time limits and deal purchase intention, and did not consider that not all restriction messages are created equal; different restrictions may signal deal value in different ways or through different mechanisms. Consumers appear to perceive that time limits are used to attract consumers to the brand, while quantity limits are necessary to reduce stockpiling. This suggests other possible differences across restrictions. For example, quantity limits could imply product quality (i.e., this product at this price is so good that purchases must be limited).
In contrast, purchase preconditions force the consumer to spend a certain amount to qualify for the deal, which suggests that inferences about the absolute quality of the promoted item would decline from purchase limits (highest quality) to time limits to purchase preconditions (lowest quality). This might be expected to hold particularly for unfamiliar brands. However, a critical but elusive issue in scarcity message research is the impact of inferred motives on the promoted scarcity message. Past researchers have not explored the possibility of inferred motives in the scarcity message context, and despite the variety of quantity-limit messages, they did not distinguish among types of scarcity message. We therefore apply a stricter definition of the scarcity message (i.e., quantity limits) and consider scarcity message type (general vs. special) and scarcity depth (high vs. low). The purpose of this study is to examine the effect of the scarcity message on consumers' purchase intention. Specifically, we investigate the effect of general versus special scarcity messages on purchase intention, using the level of scarcity depth as a moderator. In other words, we postulate that scarcity message type and scarcity depth play an essential moderating role in the relationship between inferred motives and purchase intention. Unlike past studies, we examine the interplay between perceived motives and scarcity type, and between perceived motives and scarcity depth. Both of these constructs have been examined in isolation, but a key question is whether they interact to produce an effect as scarcity message type changes or scarcity depth increases. The motive inferred behind the scarcity message will have an important impact on consumers' reactions as scarcity depth increases.
In relation to this general question, we investigate the following specific issues. First, do consumers' inferred motives weaken the positive relationship between a decrease in scarcity depth and purchase intention, and if so, by how much? Second, we examine the interplay between scarcity message type and purchase intention in the context of decreasing scarcity depth. Third, we study whether scarcity message type and scarcity depth directly affect purchase intention. To answer these questions, this research used a 2 (intention inference: existence vs. nonexistence) × 2 (scarcity type: special vs. general) × 2 (scarcity depth: high vs. low) between-subjects design. The results are summarized as follows. First, intention inference (inferred motive) has no significant effect on the scarcity effect in the case of a special scarcity message; for a general scarcity message, however, the absence of an intention inference is more effective for purchase intention than its presence. Second, intention inference has no significant effect in the case of low scarcity; for high scarcity, however, the absence of an intention inference is more effective for purchase intention than its presence. These results will help managers understand the relative importance of scarcity message types and make decisions about using scarcity messages. Finally, this article makes several contributions. First, we have shown that restrictions serve to activate a mental resource that is used to render a judgment about a promoted product. In the absence of other information, this resource appears to lead to an inference of value. In the presence of other value-related cues, however, whether data-based (i.e., scarcity depth: high vs. low) or concept-based (i.e., scarcity type: special vs. general), the resource is used in conjunction with those cues as a basis for judgment, leading to different effects across levels of these other value-related cues. Second, our results suggest that a restriction can affect consumer behavior through four possible routes: 1) the affective route, by making consumers feel irritated; 2) the cognitive route, by making consumers infer motivation or attribution about the promoted scarcity message; 3) the economic route, by making the consumer lose an opportunity to stockpile at a low scarcity depth, or by forcing him or her to make additional purchases; and 4) the informative route, by changing what the consumer believes about the transaction. Third, as noted already, these results suggest that we should consider consumers' inferences of motives or attributions at each scarcity depth level, and the cognitive resources available, in order to have a complete understanding of the effects of quantity restriction messages.


Memory Organization for a Fuzzy Controller

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to other pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation of the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that, in almost all cases, points common to several fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had instead chosen to memorize all values of the membership functions, each memory row would have held the membership value of every fuzzy set, for a word of 8 × 5 bits, and the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on the elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net): if the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values at any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
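The word-length bookkeeping above is easy to check numerically. A minimal sketch of the formula Length = nfm × (dm(m) + dm(fm)) and of the comparison against full vectorial memorization, using the figures quoted in the abstract:

```python
def word_length(nfm, dm_m, dm_fm):
    """Memory word length in bits: nfm entries per row, each holding a
    membership value (dm_m bits) plus the index of its fuzzy set (dm_fm bits)."""
    return nfm * (dm_m + dm_fm)

def memory_bits(universe, nfm, dm_m, dm_fm):
    """Total antecedent memory: one word per element of the universe."""
    return universe * word_length(nfm, dm_m, dm_fm)

def vectorial_bits(universe, n_sets, dm_m):
    """Baseline: memorize the membership value of every fuzzy set per row."""
    return universe * n_sets * dm_m
```

With the abstract's parameters (128 elements, 8 fuzzy sets, 32 truth levels, nfm = 3) this gives a 24-bit word and 128 × 24 = 3072 bits of memory, versus 128 × 40 = 5120 bits for full vectorial memorization.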
