This paper discusses the optimization of hardware architecture from the perspective of compilation, namely compilation technology targeted at a particular architecture. That is, the paper analyzes how to optimize the instruction set, register allocation, and pipelining of a hardware architecture using GCC compilation techniques such as peephole optimization, graph coloring, and instruction scheduling.
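To make the first of those techniques concrete, a peephole optimizer scans a linear instruction stream through a small window and rewrites known wasteful patterns. The sketch below is a hypothetical illustration with an assumed tuple instruction format, not the paper's or GCC's actual implementation:

```python
# Minimal peephole-optimizer sketch: scan a linear instruction list with a
# two-instruction window and replace known wasteful patterns with cheaper
# ones. Instruction encoding and patterns are illustrative assumptions.

def peephole(instrs):
    out = []
    i = 0
    while i < len(instrs):
        # Pattern 1: a store immediately followed by a load of the same
        # register/location pair; the load is redundant and can be dropped.
        if (i + 1 < len(instrs)
                and instrs[i][0] == "store" and instrs[i + 1][0] == "load"
                and instrs[i][1:] == instrs[i + 1][1:]):
            out.append(instrs[i])
            i += 2
            continue
        # Pattern 2: adding an immediate 0 to a register into itself
        # is a no-op and can be deleted.
        if (instrs[i][0] == "addi" and instrs[i][1] == instrs[i][2]
                and instrs[i][3] == 0):
            i += 1
            continue
        out.append(instrs[i])
        i += 1
    return out

prog = [("store", "r1", "mem0"), ("load", "r1", "mem0"),
        ("addi", "r2", "r2", 0)]
print(peephole(prog))  # → [('store', 'r1', 'mem0')]
```

Real peephole passes operate on the compiler's internal representation (RTL, in GCC's case) with many such patterns, but the window-scan-and-rewrite structure is the same.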
A packet generator and a network traffic capture system have been implemented on the NetFPGA, an open networking hardware platform that enables rapid development of hardware-accelerated packet-processing applications. The packet generator application allows Internet packets to be transmitted at line rate on up to four Gigabit Ethernet ports simultaneously. The data transmitted is specified...
Different versions of the high-level synthesis of parallel multipliers for binary numbers were considered, and their parameters, such as hardware cost and time delay, were determined. The possible scope of their application was also estimated.
Usually, in high-level hardware synthesis, all functional units of the same type have a fixed, known "length" (number of stages), and the scheduler mainly determines when each unit is activated. We focus on scheduling techniques for the high-level synthesis of pipelined functional units in which the number of stages of these operations is a free parameter of the synthesis. This problem is motivated...
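The trade-off behind treating the stage count as a free parameter can be sketched with a toy timing model: with `s` stages and an initiation interval of one, `n` back-to-back operations take `s + (n - 1)` cycles, while the clock period shrinks roughly as the unit's combinational delay divided by `s` plus a fixed per-stage register overhead. All constants below are assumptions for illustration, not figures from the paper:

```python
# Toy model of the stage-count trade-off for a pipelined functional unit.
# Deeper pipelining lengthens the fill latency (more stages) but shortens
# the clock period, which pays off for long streams of operations.
# unit_delay and reg_overhead are assumed illustrative constants (ns).

def total_time(n_ops, stages, unit_delay=10.0, reg_overhead=1.0):
    cycle = unit_delay / stages + reg_overhead   # modeled clock period
    cycles = stages + (n_ops - 1)                # pipeline fill + steady state
    return cycle * cycles

for s in (1, 2, 5, 10):
    print(s, round(total_time(100, s), 1))
# With these constants, 100 operations finish faster as stages increase,
# which is why a scheduler may want to choose the depth per design.
```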
Increasing cache latency in next-generation processors has a profound performance impact in spite of advanced out-of-order execution techniques. One way to circumvent this cache-latency problem is to predict load values at the onset of pipeline execution by exploiting either the value locality of loads or the address correlation between stores and loads. In this paper, we describe a new load value speculation...
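As background, the simplest form of load value speculation that exploits value locality, last-value prediction, can be sketched as a table indexed by the load's program counter with a saturating confidence counter. This is an illustrative toy model, not the scheme the paper proposes:

```python
# Toy last-value load predictor: a table indexed by the load's PC
# remembers the value that load produced last time plus a small
# saturating confidence counter. A value is speculated only once the
# same value has repeated enough times. Illustrative model only.

class LastValuePredictor:
    def __init__(self, threshold=2):
        self.table = {}            # pc -> [last_value, confidence]
        self.threshold = threshold

    def predict(self, pc):
        entry = self.table.get(pc)
        if entry and entry[1] >= self.threshold:
            return entry[0]        # confident: speculate with last value
        return None                # no prediction; wait for the cache

    def update(self, pc, actual):
        entry = self.table.setdefault(pc, [actual, 0])
        if entry[0] == actual:
            entry[1] = min(entry[1] + 1, 3)   # value repeated: gain confidence
        else:
            entry[0], entry[1] = actual, 0    # value changed: reset

p = LastValuePredictor()
for v in (7, 7, 7, 7):              # the same load keeps returning 7
    p.predict(0x40)
    p.update(0x40, v)
print(p.predict(0x40))  # → 7, once confidence reaches the threshold
```

A real pipeline would additionally need a recovery mechanism to squash dependent instructions on a value misprediction; the table above models only the prediction side.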