CPU starvation on a gpu_1x_a10 instance

I have a long-running, CPU-bound, multi-threaded program running on a gpu_1x_a10 instance. Progress over time has stalled twice for an unknown reason, as shown in this graph (Imgur link). During these periods, I can see that CPU usage is near 0% via htop.

I assume that I’m running on a VM and sharing the CPU with other users. The two plateaus in the graph seem to indicate that my program is being starved of CPU for long stretches. (It’s theoretically possible that my program has some sort of bug, but it would be difficult to explain such a drastic drop-off and recovery.)
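To test the starvation theory, here is a minimal sketch of something I could run alongside the program (assuming a standard Linux guest): it periodically samples the "steal" field from /proc/stat, which should spike if the hypervisor is withholding CPU from my VM during a plateau. If steal stays near zero while progress is stalled, that would point back at my own program instead.

```python
#!/usr/bin/env python3
"""Sketch: log the hypervisor 'steal' share of CPU time from /proc/stat.

Assumes a Linux guest where the hypervisor reports steal time.
High %steal during a plateau would suggest CPU starvation by the host,
rather than a bug in my program.
"""
import time


def read_cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq steal ..."
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    values = [int(v) for v in fields]
    steal = values[7] if len(values) > 7 else 0
    return steal, sum(values)


if __name__ == "__main__":
    prev_steal, prev_total = read_cpu_times()
    while True:
        time.sleep(10)
        steal, total = read_cpu_times()
        delta_total = (total - prev_total) or 1
        pct = 100.0 * (steal - prev_steal) / delta_total
        print(f"{time.strftime('%H:%M:%S')} steal: {pct:.1f}%", flush=True)
        prev_steal, prev_total = steal, total
```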

The same thing happened to me yesterday on another instance. Is there anything I can do to prevent this from occurring?