Does this looping stop when the interrupt occurs and resume when the interrupt callback function finishes execution?
Yep, that's what an interrupt does!
This is exactly the purpose of interrupt service routines.
Since I see several answers saying the above, I feel it might be interesting to explore the Devil's Advocate position.
Now, this probably involves a bit more implementation detail than is desired (or even understandable?) here, and since no particular architecture has been specified, I have to keep things general, which doesn't help the concreteness either.
If that is the case, then don't mind this answer; and, in any case, do keep doing the usual thing [interrupt returns quickly], especially if anyone else ever needs to read your code!
So, that said:
The above is merely the most common use case. To be clear, it's a very common use case; common enough that it's also quite a strong assumption within most (all? ...all that are worth using?) compilers/toolchains.
But it doesn't need to be the case.
In typical modern-ish architectures, what happens during an interrupt is this: the CPU pushes the current address onto the stack (so it can be returned to later)*, jumps to some address to handle the interrupt (or fetches a corresponding address from memory and jumps to the specified location), and simply continues executing from there as if nothing else has changed. It's entirely up to the ISR (interrupt service routine) to handle whatever values the CPU has on hand -- its context, i.e., the registers -- and to return to the place it interrupted, as if nothing at all had happened (registers restored perfectly).
*Many architectures store this in a temporary register instead, but the most common use case is to immediately push it onto the stack in software. Well, "most common" may be pushing it, as there are cases where registers don't need to be saved at all (the ISR is simple enough not to bother pushing things to the stack). Whether that happens by hand optimization or by compiler-generated output also varies widely.
Normally, an ISR pushes those registers onto the stack (or whatever substitute or equivalent the machine has), does its work, restores registers from stack, and returns to the interrupted location. The CPU simply jumps back and resumes execution.
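To make that concrete, here's roughly what it looks like in C on an AVR with avr-gcc (assuming a part that defines TIMER0_OVF_vect); the ISR() macro is what tells the compiler to generate that push/do-work/restore/return-from-interrupt sequence around your code:

```c
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t tick;   /* shared with main(), hence volatile */

/* avr-gcc emits the prologue (push the registers this body clobbers),
   the epilogue (pop them back), and the final RETI for us. */
ISR(TIMER0_OVF_vect)
{
    tick++;              /* do the minimal work, then return */
}
```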
(This also covers, with a sufficient degree of generalization, machines with one interrupt, where software has to figure out where to go; in that case, the sub-ISR for each device is selected by some code common to all of them. Some registers may have been saved by the common code, which just shifts around who's responsible for what.)
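A sketch of what that common code might look like; the register name IRQ_PENDING and the bit masks are entirely made up, standing in for whatever your interrupt controller actually provides:

```c
#include <stdint.h>

/* Hypothetical memory-mapped "which source is pending?" register. */
#define IRQ_PENDING   (*(volatile uint32_t *)0x40000000u)
#define IRQ_UART_BIT  (1u << 0)
#define IRQ_TIMER_BIT (1u << 1)

void uart_isr(void);
void timer_isr(void);

/* The hardware only knows "an interrupt happened"; software reads a
   status register and dispatches to the right sub-ISR.  Register saving
   happens once, here (or in the compiler-generated prologue), rather
   than in each sub-handler. */
void common_irq_handler(void)
{
    uint32_t pending = IRQ_PENDING;

    if (pending & IRQ_UART_BIT)  uart_isr();
    if (pending & IRQ_TIMER_BIT) timer_isr();
}
```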
We could very well write a sort of inverted interrupt system, where most of the time is spent in interrupt(s) instead, and the act of RETurning from the interrupt is equivalent to calling main() -- but not calling it from the top, or just anywhere, but precisely where it was interrupted from.
What use is that? It's certainly a powerful, and potentially fine-grained, method of control. Hm. Jumping into code "at random" certainly will not do, as registers need to be initialized to reasonable values for any given code passage to do something meaningful (let alone its intended function).
At the very least, we'd need to record what state the CPU was in (registers and such, probably stack memory as well) upon interruption, and restore everything back exactly as it was found on return. Anything more is either more of the same, or we need to know a heck of a lot more about the active code: what it does, at what addresses it can be jumped into, and with what register values.
Well, even just the basics leave some interesting options. This mechanic could be the basis of a software debugger, where the CPU is interrupted as soon as it can be (i.e., just after executing the returned-to instruction), thus effectively single-stepping the CPU. Register values can be read out and modified at each step, or the return location modified, allowing stepping through the program in various ways. The ISR doesn't need to know about the interrupted code; it can assume the user knows what they're doing.
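As a sketch, the debugger side might look like the following; everything here is an assumption (the cpu_context layout, and the premise that the hardware offers a trace/single-step interrupt at all):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical snapshot saved by the interrupt entry code. */
struct cpu_context {
    uint32_t r[16];   /* general-purpose registers        */
    uint32_t pc;      /* address execution will resume at */
};

/* Trace ISR: fires after each instruction of the debuggee.  Whatever is
   in the saved context when we return is what the CPU resumes with, so
   reading and patching it is all the debugger needs to do. */
void trace_isr(struct cpu_context *ctx)
{
    printf("stopped at 0x%08lx\n", (unsigned long)ctx->pc);

    /* e.g. poke a register, or redirect execution entirely:
       ctx->r[0] = 42;
       ctx->pc   = some_other_address;  */
}
```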
The interrupt could also be the elementary operation of a multitasking operating system, where execution paths may be divided between multiple locations (threads); context (registers and other data) is stored specific to each thread, and the interrupt system determines which one to fetch and return to based on a priority scheme.
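A minimal sketch of that elementary operation; keeping only a saved stack pointer per thread is an assumption for brevity (a real RTOS keeps rather more per-thread state):

```c
#include <stdint.h>

#define NUM_THREADS 2

/* Per-thread context: just a saved stack pointer here; the registers
   themselves sit on that thread's stack, pushed by the interrupt entry. */
struct thread {
    uint32_t *sp;
};

static struct thread threads[NUM_THREADS];
static int current;

/* Called from the timer ISR after the interrupted thread's registers
   have been pushed.  Returns the stack pointer to restore from; the ISR
   epilogue then pops that thread's registers and "returns" into it. */
uint32_t *schedule(uint32_t *saved_sp)
{
    threads[current].sp = saved_sp;          /* park the interrupted thread */
    current = (current + 1) % NUM_THREADS;   /* trivial round-robin policy  */
    return threads[current].sp;              /* resume the other one        */
}
```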
Where does a thread come from? It's an object made (and executed) by the kernel, at will; we can construct an entirely new context by initializing a thread object (including default registers and memory space) and jumping to it. As long as the code in that thread expects such a start, that's all that matters. Hence we can load and run application code at will. (At least, subject to memory limitations; some platforms have read-only code space, for example!)
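Constructing such a context from scratch might look like this sketch; the stack-frame layout is invented for illustration and assumes 32-bit addresses, and a real implementation has to match exactly what its ISR epilogue pops:

```c
#include <stdint.h>
#include <stddef.h>

/* Build a brand-new context: fill a fresh stack with the layout the ISR
   epilogue expects to pop, with the "return address" slot pointing at the
   thread's entry function.  The first time the scheduler switches to it,
   the CPU "returns" into entry() as if it had been interrupted there. */
uint32_t *thread_create(void (*entry)(void), uint32_t *stack, size_t words)
{
    uint32_t *sp = stack + words;            /* stack grows downward here  */

    *--sp = (uint32_t)(uintptr_t)entry;      /* return address -> entry()  */
    for (int i = 0; i < 16; i++)
        *--sp = 0;                           /* default register values    */

    return sp;   /* store in the thread object; the scheduler does the rest */
}
```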
With reflection (execution-time code analysis and generation), quite complex schemes of execution, function evaluation, object handling, and so on can be constructed; this is very much state-of-the-art technology, usually by way of bytecode that is interpreted or JIT-compiled (to varying degrees in Java, JS, C#, etc.). (This is probably a poor description of things; I'm not very familiar with the advanced features, let alone internals, of these languages.)
Mind, just because you can, doesn't mean you should. There are more standardized ways to do this, perhaps using library functions, perhaps using more refined and well-understood RTOSs.
And, not all architectures can. Many (especially embedded platforms?) have an interrupt controller which monitors CPU activity / state / bus, and which can only be reset (cleared to generate another interrupt) when the CPU RETurns, that is, literally executes the interrupt-return instruction per se. This would make such interrupt service, if not impossible, at least more complicated. (For example, PUSHing a "fake" return address and RETing immediately, to clear the interrupt controller while continuing to execute the ISR; now other interrupts can overlap... but that includes the same interrupt again; one must be careful to avoid stack overflow!)