Appeal No. 2001-2610
Application No. 09/052,247

Fetching units first look to the cache for the next needed instruction in a set of instructions. If the instruction is not in the cache, this is termed a "cache miss," and the fetching unit must then retrieve the instruction from the system memory. As processor clock rates increase more rapidly than memory access times improve, the latency penalty from a cache miss grows accordingly. Memory latency due to a cache miss may be reduced by prefetching an instruction cache line from a system memory device. The problem is that if an instruction that alters the instruction sequence path is executed, the prefetched cache line may go unused, because the jump may target an instruction path outside the prefetched cache line. Prefetching a cache line that is not used leads to "cache pollution," which reduces the effectiveness of prefetching. The present invention is directed to a prefetch mechanism that permits cache miss requests to be issued earlier without increasing cache pollution.

Representative independent claim 1 is reproduced as follows:

1. In a data processor, a method of reducing cache miss penalties comprising the steps of:

Last modified: November 3, 2007
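The cache-miss, next-line-prefetch, and cache-pollution behavior described in the background above can be illustrated with a toy model. This is a minimal sketch for illustration only, not the patented mechanism; all names (`ToyICache`, `LINE_SIZE`, etc.) are hypothetical, and the model assumes a simple sequential next-line prefetch policy.

```python
# Toy model of an instruction cache with next-line prefetch.
# On a miss, the fetch unit loads the missing line and prefetches
# the sequentially next line; a branch that jumps outside the
# prefetched path leaves that line unused ("cache pollution").
# All names and the prefetch policy are hypothetical illustrations.

LINE_SIZE = 4  # instructions per cache line (assumed)

class ToyICache:
    def __init__(self):
        self.lines = set()        # cache line numbers currently resident
        self.prefetched = set()   # prefetched lines not yet referenced
        self.misses = 0

    def fetch(self, addr):
        line = addr // LINE_SIZE
        if line in self.lines:
            self.prefetched.discard(line)  # prefetch proved useful
            return
        self.misses += 1
        self.lines.add(line)
        nxt = line + 1                     # sequential next-line prefetch
        if nxt not in self.lines:
            self.lines.add(nxt)
            self.prefetched.add(nxt)

    def pollution(self):
        return len(self.prefetched)        # prefetched but never used

# Straight-line code benefits from the prefetched line...
c = ToyICache()
for addr in range(0, 8):  # addresses 0..7 span lines 0 and 1
    c.fetch(addr)
# ...but a jump far outside the prefetched path wastes a prefetch.
c.fetch(100)
print(c.misses, c.pollution())  # 2 misses, 1 polluted line
```

In the sequential run, the prefetch of line 1 is consumed without a second miss; the distant jump triggers a new miss and leaves its prefetched neighbor unused, which is the pollution the specification is concerned with.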