First, a bit of context. Most of the animations in my app involve moving a piece on a game board from point A to point B, or showing a piece attacking another piece. Each frame for a piece is composed of multiple sprites. With a few exceptions, the sprites for a given piece type are all contained within a single spritesheet - one spritesheet per piece type. I made sure each spritesheet itself uses power-of-two (POT) dimensions, although each frame within it has arbitrary dimensions.
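As an aside, the POT constraint is easy to sanity-check at build or load time. Here's a small helper (my own, not part of PIXI) that validates a spritesheet's dimensions with the usual bit trick:

```javascript
// Returns true when n is a positive power of two (1, 2, 4, 8, ...).
// A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
function isPowerOfTwo(n) {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}

// Validate a spritesheet's dimensions; `sheet` is any object with
// width/height (e.g. a loaded texture's base dimensions).
function assertPotSheet(sheet) {
  if (!isPowerOfTwo(sheet.width) || !isPowerOfTwo(sheet.height)) {
    throw new Error(`spritesheet is ${sheet.width}x${sheet.height}, not POT`);
  }
}
```

The individual frames don't need to satisfy this check; only the sheet they're packed into does.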
Before running a move or attack sequence that involves one or more pieces on the board, I pre-compile the frames for the entire animation. Conceptually, this involves building an array of PIXI container objects for each piece. Each container in the array is the root of a tree of containers and sprites that together provide a complete picture of the piece in a single frame. All tints, color-matrix filters, positions, and scaling are pre-applied. At that point, drawing a frame involves swapping out the root containers for each piece involved in the animation and then rendering the stage. The rest of the stage remains unchanged. Once the animation ends, all of the pre-compiled containers and sprites are destroyed; they are not reused, in whole or in part, in subsequent animations.
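To make the swap pattern concrete, here's a minimal sketch of what I mean. The class and method names are my own, and I'm using plain objects with a `children` array as stand-ins for PIXI containers so the sketch is self-contained; the comments note the real PIXI calls:

```javascript
// Sketch of the pre-compiled frame-swap pattern. `stage` stands in for
// the parent PIXI.Container that holds the piece's root.
class PieceAnimation {
  constructor(stage, frames) {
    this.stage = stage;   // parent container on the live stage
    this.frames = frames; // pre-compiled root containers, one per frame
    this.current = null;  // root currently attached to the stage
  }

  // Swap in the pre-compiled root for frame i; nothing else on the
  // stage is touched, so the per-frame work is just two child ops.
  showFrame(i) {
    if (this.current) {
      const idx = this.stage.children.indexOf(this.current);
      this.stage.children.splice(idx, 1); // stage.removeChild(this.current)
    }
    this.current = this.frames[i];
    this.stage.children.push(this.current); // stage.addChild(this.frames[i])
  }

  // After the animation ends, tear down every pre-compiled frame.
  destroy() {
    if (this.current) {
      const idx = this.stage.children.indexOf(this.current);
      this.stage.children.splice(idx, 1);
    }
    for (const frame of this.frames) {
      frame.destroyed = true; // frame.destroy({ children: true })
    }
    this.frames = [];
    this.current = null;
  }
}
```

One design note: because the inactive frames stay parented to nothing, they cost nothing at render time; the only ongoing cost is the GPU/texture memory they hold until `destroy()` runs.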
If you have more questions about the context, let me know. While I've made sure the stage manipulation during animation execution is very efficient, I want to make sure stage rendering is efficient as well. As things stand, rendering the stage can take 10 milliseconds on my laptop even when nothing about the stage has changed. Even though the game runs at 12 fps, allowing ~83 ms of rendering time per frame, I've suffered from intermittent skipped frames when playing the game on my phone. I haven't yet proven that the skipped frames on my phone are caused, even in part, by long stage-rendering times, but I figured I'd see if there's an opportunity for improvement there. Any best practices you can think of, or shortcomings in the approach I'm taking?
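For what it's worth, this is roughly how I'm thinking about the frame budget and how one could instrument the render call to confirm (or rule out) rendering as the cause of the skipped frames. `renderStage` is a hypothetical stand-in for `renderer.render(stage)`:

```javascript
// At 12 fps, each frame gets 1000 / 12 ≈ 83.3 ms of wall-clock time.
const FPS = 12;
const FRAME_BUDGET_MS = 1000 / FPS;

// Wrap the render call, log when it blows the budget, and return the
// elapsed time so spikes can be correlated with skipped frames.
function timedRender(renderStage) {
  const start = performance.now();
  renderStage();
  const elapsed = performance.now() - start;
  if (elapsed > FRAME_BUDGET_MS) {
    console.warn(
      `render took ${elapsed.toFixed(1)} ms, ` +
      `over the ${FRAME_BUDGET_MS.toFixed(1)} ms frame budget`
    );
  }
  return elapsed;
}
```

Logging the elapsed times on the phone itself would tell me whether the 10 ms laptop figure balloons on mobile, or whether the skipped frames come from somewhere else entirely.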