I’m hoping this project just isn’t representative of After Effects performance in general, and that there are a lot of specifics making it so slow.

I started a project on my old Hackintosh, which had an i7-6700 CPU, 32GB of DDR3 RAM and an NVIDIA GTX 970. It was always going to be a big ask of that machine, because I’m working with a 5472x3468 PSD file as the main material used and seen in my composition. The idea is for this PSD image to appear to be made of glass: cracks appear in a very specific pattern defined by a separate Illustrator ‘cracks.ai’ file, and then chunks of the image fall away based on the shapes those lines form. Though the source file is high resolution, the composition itself is HD (pillarboxed to fit the image at its correct aspect ratio).
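In ExtendScript terms, the basic setup is roughly this (the file path and comp settings below are placeholders, not my actual project values):

```javascript
// Illustrative sketch only: recreates the basic setup via scripting.
var proj = app.project;

// Import the PSD as a layered comp rather than flat footage.
var io = new ImportOptions(File("~/project/image.psd")); // hypothetical path
io.importAs = ImportAsType.COMP;
var psdComp = proj.importFile(io);

// HD composition; the 5472x3468 source gets pillarboxed inside 1920x1080.
var main = proj.items.addComp("Main", 1920, 1080, 1.0, 10, 25);
var srcLayer = main.layers.add(psdComp);

// Fit by height: 1080 / 3468 is roughly 31.14% scale, leaving bars at the sides.
srcLayer.property("Scale").setValue([31.14, 31.14]);
```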

The best approach I was able to come up with was the Shatter effect with a custom shatter map: a version of the crack-lines Illustrator file in which each shape formed by those lines is filled with one of the solid colours the Shatter effect accepts for custom shatter maps. To precisely time the fall of each of the 12 shards, I made 12 precompositions, themselves containing many precompositions, which essentially produce a version of the source PSD where only one shard is visible at a time. A Shatter effect is then applied to each such precomposition using a duplicate of the cracks shatter map with only that one shape visible, so each precomp’s Shatter effect contains exactly one piece of glass. This means a lot of stacked precomps, each using a high-resolution PSD imported as layers rather than as footage, which I’m guessing adds to the strain. It was certainly taxing my old computer’s modest specs. I also understand that Shatter is a very old effect and that it renders in 8-bit only.
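To give a sense of the duplication involved, here’s a rough ExtendScript sketch of the per-shard setup (the comp and layer names are made up; my actual project was built by hand):

```javascript
// Rough sketch of the per-shard precomp duplication described above.
// All names ("ShardSource", "Shard Map N") are hypothetical.

function findComp(name) {
    for (var i = 1; i <= app.project.numItems; i++) {
        var it = app.project.item(i);
        if (it instanceof CompItem && it.name === name) return it;
    }
    return null;
}

// Precomp holding the PSD layers plus one map layer per shard shape.
var source = findComp("ShardSource");

for (var shard = 1; shard <= 12; shard++) {
    var dup = source.duplicate();
    dup.name = "Shard " + shard;

    // Leave only this shard's map shape visible in the duplicate,
    // so the Shatter effect sees a single piece of glass.
    for (var j = 1; j <= dup.numLayers; j++) {
        var lyr = dup.layer(j);
        if (lyr.name.indexOf("Shard Map") === 0) {
            lyr.enabled = (lyr.name === "Shard Map " + shard);
        }
    }

    // Apply Shatter to the top layer; "ADBE Shatter" is the effect's match name.
    dup.layer(1).property("ADBE Effect Parade").addProperty("ADBE Shatter");
}
```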

Having switched the project over to my new M2 Max MBP with 32GB of RAM, I was really hoping for substantially faster render times, even dreaming that it might be real time. Not only is it not real time, it seems no better than my old machine, and possibly worse. I can’t really account for this. I’m not sure whether this old 8-bit effect uses the GPU, but even if it doesn’t, I’d have thought the CPU would be vastly more capable. Based on what I’ve described, is this to be expected?

  • @hanni

    Is it running natively or through Rosetta?
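    One quick way to check, assuming Apple Silicon and macOS 11+: run this from File > Scripts > Run Script File inside AE (it shells out to `sysctl`, and the child shell should inherit AE’s translation state):

    ```javascript
    // Prints "1" if the process tree runs under Rosetta 2, "0" if native;
    // the sysctl key doesn't exist at all on Intel Macs.
    var out = system.callSystem("sysctl -n sysctl.proc_translated");
    alert("proc_translated: " + out);
    ```

    You can also just look in Activity Monitor: the Kind column reads “Apple” for native processes and “Intel” for translated ones.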