04-25-2014, 01:18 AM
(04-24-2014, 08:42 PM)hlide Wrote: I'm not sure what a caching interpreter is, but it may be a case of JIT (like the one in Xenia), which reduces the overhead of decoding a contiguous run of instructions (usually called a basic block) by decoding it only once and then executing the decoded form every time the instruction fetcher hits that address. On some architectures you need a special instruction to flush the icache, and that is also the point where you can discard a basic block from the instruction cache.
The simplest way is probably something like this:
Code:struct insn { void (*interpret_insn)(Context & context); };
...
std::unordered_map< u32, std::vector< insn > > insn_cache;
...
do
{
    auto pc = context.pc;
    auto bb = insn_cache.find(pc);
    if (bb == insn_cache.end())
    {
        // decode the block once and remember where it was stored
        bb = insn_cache.emplace(pc, decode_insn(pc)).first;
    }
    // execute the cached block (each insn updates context.pc)
    for (auto & i : bb->second) i.interpret_insn(context);
}
while (context.no_external_event);
...
}
P.S.: decode_insn returns a vector of decoded insns representing a basic block (no branch-like instruction appears in the block except as its last instruction).
It looks like all you're doing is caching the decoded instructions and storing them as function objects or something. At least, that's the only thing I could make out of it. I doubt the speedup would justify the amount of memory it uses.
A JIT, instead, would take a basic block or an entire procedure, recompile it to the target ISA, and cache that native code so it can simply run it directly. (And indeed that is a great goal to have -- which rpcs3 will do in the future. )