Conversation
Thanks for handing that in! The timing is not great, though. The next glTFast version will feature an animation API that refactors the base you've worked on. I'll see if I can merge your efforts. Why not use …
I tried this. It turns out the pooling has an immense impact on performance. While GC goes down big time (unsurprisingly, as the allocations move to malloc in native land), the time is back up 60-70% (tested on BrainStem.gltf). I think the best long-term solution is to combine array pooling with NativeArrays. The question now becomes: is the regression in garbage allocation worth the speed-up? Even the relatively small BrainStem model now produces 200 kB of additional garbage (at a ~50% speed-up). I gravitate towards no, but I'm open to more profiling and discussion.
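For reference, the managed half of that combination is `System.Buffers.ArrayPool<T>`. A minimal sketch (plain .NET, outside Unity; the names here are illustrative, not glTFast code) of the rent/return pattern that keeps steady-state GC allocations near zero:

```csharp
using System;
using System.Buffers;

class PoolDemo
{
    // Consumers take a span over the rented buffer, since Rent may
    // hand back a larger array than requested.
    static long Sum(ReadOnlySpan<int> data)
    {
        long s = 0;
        foreach (var x in data) s += x;
        return s;
    }

    static void Main()
    {
        var pool = ArrayPool<int>.Shared;
        int[] buf = pool.Rent(1024);            // reused buffer, no new allocation after warm-up
        for (int i = 0; i < 1024; i++) buf[i] = i;
        long total = Sum(buf.AsSpan(0, 1024));  // slice to the size actually requested
        pool.Return(buf);                       // hand the buffer back for reuse
        Console.WriteLine(total);               // 523776
    }
}
```

After the first rent, subsequent Rent/Return cycles of the same size reuse the buffer, which is why GC pressure drops even though the arrays are still managed.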
---
I've rebased your work onto the current development branch. If you continue working on this, please do it from the updated branch:
---
Another remark: after the transition to NativeArray for Keyframes, every structure is thread-safe and Burst-compatible, so things could be sped up even further!
---
Here's a PoC with (amateurishly) pooled NativeArrays in branch user/joverral/animationUtil_opts-native. It's comparably fast (sometimes faster) without the garbage allocations (150kB vs. 4kB for vec3 on BrainStem). What's unresolved is reliably allocating and freeing up those pools (across glTFs). Would be great if we could polish that. |
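The unresolved lifecycle question is essentially: who owns the pooled buffers, and when is it safe to free them once no glTF import still needs them? A minimal sketch of that lifecycle (hypothetical class, not the PoC branch's code; it uses `AllocHGlobal` so it runs outside Unity, whereas the PoC pools `NativeArray<T>` — the rent/return/dispose shape is the same):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

// Pools native buffers by size: Rent reuses a returned buffer when one is
// available, otherwise allocates natively (no GC pressure). Dispose is the
// single, explicit point where everything is freed — e.g. once all glTF
// imports sharing the pool have finished.
sealed class NativeBufferPool : IDisposable
{
    readonly Dictionary<int, Stack<IntPtr>> free = new();
    readonly List<(IntPtr ptr, int size)> all = new();

    public IntPtr Rent(int bytes)
    {
        if (free.TryGetValue(bytes, out var stack) && stack.Count > 0)
            return stack.Pop();                  // reuse, no allocation
        IntPtr p = Marshal.AllocHGlobal(bytes);  // native malloc, invisible to GC
        all.Add((p, bytes));
        return p;
    }

    public void Return(IntPtr ptr, int bytes)
    {
        if (!free.TryGetValue(bytes, out var stack))
            free[bytes] = stack = new Stack<IntPtr>();
        stack.Push(ptr);
    }

    public void Dispose()
    {
        foreach (var (p, _) in all) Marshal.FreeHGlobal(p);
        all.Clear();
        free.Clear();
    }
}
```

Tying Dispose to a well-defined owner (per-import, or a ref-counted shared pool) is the part that still needs polishing, as noted above.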
---
Yes, the move to native is a nice change. I'm not sure of the best place to free up memory; that is a bit of a downside. You might see a decent gain by marking the static method as [BurstCompile] as well. I'd be leery of doing a full Burst job, as there is a lot of time spent in startup/teardown on jobs, so unless the job were big enough, there wouldn't be any savings.
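For illustration, the [BurstCompile]-on-a-static-method suggestion looks roughly like this. This is a sketch, not glTFast code: the class, method, and operation are hypothetical, and direct-calling Burst-compiled static methods from managed code requires a recent Burst version with blittable parameters.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Mathematics;

[BurstCompile]
static class AnimationUtilsBurst
{
    // With [BurstCompile] on both the class and the method, Burst compiles
    // this static method and managed callers invoke the compiled code
    // directly — no job scheduling or teardown overhead, which is exactly
    // the cost a full Burst job would pay.
    [BurstCompile]
    public static void FlipZ(ref NativeArray<float3> values)
    {
        for (int i = 0; i < values.Length; i++)
        {
            float3 v = values[i];
            v.z = -v.z;
            values[i] = v;
        }
    }
}
```

The appeal over a job is that a plain call keeps the existing synchronous code path while still getting Burst-compiled loop bodies.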
From: Andreas Atteneder ***@***.***>
Sent: Tuesday, April 14, 2026 3:42 AM
To: Unity-Technologies/com.unity.cloud.gltfast ***@***.***>
Cc: Josh Verrall ***@***.***>; Author ***@***.***>
Subject: Re: [Unity-Technologies/com.unity.cloud.gltfast] Optimize AnimationUtils (PR #44)
atteneder left a comment (Unity-Technologies/com.unity.cloud.gltfast#44 (comment))
---
AddKey is slow, as it re-sorts the keys on every call. It is much faster to use the newer SetKeys method and pass in a Span over the array of keys we're using from a shared array pool. In addition, we're seeing some glTF models with hundreds of curves, so paying for string reformatting 900 times seems unnecessary.
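A sketch of the batched pattern (hypothetical helper, not the PR's code): build all keyframes into a rented array, then hand them to the curve in one call instead of paying AddKey's per-insert sort. On Unity versions that expose a span-taking SetKeys, as mentioned above, the rented buffer can be passed directly; the widely available `AnimationCurve.keys` setter shown here needs an exactly sized array, so it trims first.

```csharp
using System;
using System.Buffers;
using UnityEngine;

static class CurveBuilder
{
    public static AnimationCurve Build(float[] times, float[] values)
    {
        var pool = ArrayPool<Keyframe>.Shared;
        Keyframe[] keys = pool.Rent(times.Length);   // may be larger than requested
        try
        {
            for (int i = 0; i < times.Length; i++)
                keys[i] = new Keyframe(times[i], values[i]);

            var curve = new AnimationCurve();
            // One assignment, one sort — instead of one sort per AddKey call.
            curve.keys = keys.AsSpan(0, times.Length).ToArray();
            return curve;
        }
        finally
        {
            pool.Return(keys);
        }
    }
}
```

With a span-taking SetKeys, the trailing `ToArray()` copy disappears as well, which is what makes that overload the better fit for pooled buffers.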

Before / After: (profiler comparison images not preserved)
