User:Sebbas/Reports/2021
Reports 2021
January 04 - 08
- General:
- Working on project proposals for the new year.
- Bugs:
- Next week:
- There are still several bugs in the tracker that I would like to have fixed in 2.92. So more bug fixing next week.
January 11 - 15
- Bugs:
- Fix T84280: Mantaflow viscosity: Repeating emission from inflow causes initial emission to be twice as fast compared to proceeding emissions.
- Fix T84103: Smoke simulation dont show up after baking Noise
- Work on fix for T84649: Quick liquid causing crash on scale operation
- Looked into macOS bug T81169, but could not figure out the issue
- Next week:
- Same as last week: Work on more bugs in the tracker.
January 18 - 22
- General:
- Next week:
- Bug sprint week.
- Extra efforts to fix T81169.
January 25 - 29
- Bugs:
- Next week:
- Help with the "Overrides" project, get familiar with the project requirements.
- Formulate idea for a GSoC project (the idea is already there :)
February 01 - 05
- General:
- LibOverride: Only show relevant operators in outliner menu (bd973dbc44)
- LibOverride: Added log statements in liboverride operator functions (07f7483296)
- Miscellaneous fixes for fluids (critical for 2.92) (a563775649)
- Bugs:
- Next week:
- Look more into macOS library tasks
- Formulate idea for a GSoC project (carry over from last week)
February 08 - 12
- General:
- Worked on dependency updates for macOS x86 and arm64 (rBL62545, rBL62549, rBL62558)
- Finalized idea / project description for a GSoC project on fluids (Machine Learning for Fluid Simulations)
- Bugs:
- Submitted D10360: Animation: Prevent keyframe manipulation in linked data
- Next week:
- Library Overrides project: Diffing code, start working on T82160
February 15 - 19
- General:
- Next week:
- Finalize macOS lib updates
February 22 - 26
- General:
- macOS (x86) library upgrades (rBL62576, rBL62578, rBL62579, rBL62584, fb7751f3e6, e00a87163c)
- Next week:
- Focus on library overrides (T82160)
March 01 - 05
- General:
- Next week:
- Catch up on physics module work
- Investigate why unit tests are failing on macOS arm64
March 08 - 12
- General:
- Fix for some duplicate users in credits (b66c22e1fb97)
- Enabled scale options for fluid particles in UI (b01e9ad4f0)
- Began catching up on open issues / reports in physics module
- Next week:
- Similar to last week, more module work.
March 15 - 19
- General:
- Libraries: macOS arm64 maintenance work (21236af80c, 970e246ccc)
- Bugs:
- Investigated T86053: 2.9x - Crash while baking particle and/or smoke simulations
- Next week:
- Similar to last week, more module work
March 22 - 26
- General:
- Preparations for new project on better real-time physics / fluids!
- Next week:
- Solve issue from T86053 without updating the blosc library
March 29 - 02
- Holiday week
April 05 - 09
- Bugs:
- Investigated T86053 and feasibility of blosc library upgrade for OpenVDB
- Next week:
- Bug fixing for 2.93
April 12 - 16
- General:
- macOS: Recompiled Python libs on 10.13 (rBL62615)
- CMake/deps: Remove CPP11 option for OpenImageIO (2cc3a89cf6)
- Next week:
- Bug fixing for 2.93
April 19 - 23
- General:
- No bug fixing this week (as planned before). Focused on real-time fluids improvements instead.
- In particular, I am exploring options on how to offload Mantaflow's (computationally expensive) simulation loops to the GPU.
- Next week:
- Bug sprint week for 2.93
April 26 - 30
- General:
- Mix of building a new workstation, reading CUDA developer docs and some potential fixes for 2.93 fluids
- GSoC proposal reviews
- Next week:
- Finalize + commit fluid bug fixes for 2.93
- Continue work on fluid real-time optimizations
May 03 - 07
- General:
- Tests and fluid experiments with CUDA on new workstation
- Next week:
- Continue CUDA development
May 10 - 14
- General:
- Still working on 1st prototype for fast(er) fluids
- Current idea is to expand the Mantaflow preprocessor with an option for GPU offloading (OpenMP directives)
- Next week:
- Check up for bcon3 and continue with prototype
May 17 - 21
- General:
- Updated fluid Mantaflow source files for 2.93: The update includes a workaround for an issue with Blosc OpenVDB compression (crash in 2.92) (8dd43ac23e). Once OpenVDB updates their recommended Blosc version, this fix can be reverted.
- GPU fluids - Mantaflow side:
- Added an -DOFFLOAD_OPENMP option to CMake. By enabling it, all Mantaflow KERNEL functions carrying an offload argument will be run on the GPU. This is achieved by making the code preprocessor place OpenMP GPU directives before for-loops (i.e. pragma omp target teams ...).
- With the option from above, KERNEL functions (that don't require memory transfer to/from the GPU) can already run on the GPU
- GPU fluids - Blender side:
- Clang compilation: Blender's clang will need to be built twice and with GPU offloading capabilities. Started adjusting the deps build, but made no tangible progress yet (linker still complaining ...).
- Next week:
- Continue (& ideally finish) work with OpenMP map() directives (the transfer of Mantaflow memory blocks (e.g. grids & particle systems) between "CPU <-> GPU")
- GSoC: Planning for the first weeks of coding (Soumya, "Simulation visualisation" project)
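As a rough illustration of the offloading pattern described above, here is a minimal sketch of the kind of loop the preprocessor could emit for a KERNEL function. The function and grid names are made up for the example; this is not the actual generated Mantaflow code, and without an offload-capable compiler and device the loop simply falls back to the host.

```cpp
#include <vector>

// Illustrative sketch only: what a preprocessor-generated GPU kernel for a
// simple per-cell grid operation could conceptually look like. The grid is
// mapped to the device, the cell loop runs there in parallel, and the
// result is mapped back to the host afterwards.
void decayDensity(std::vector<float> &density, float factor)
{
  float *d = density.data();
  const long n = static_cast<long>(density.size());
  // With offloading enabled, this loop runs on the GPU; otherwise it
  // executes as a plain (possibly parallel) host loop.
  #pragma omp target teams distribute parallel for map(tofrom: d[0:n])
  for (long i = 0; i < n; i++) {
    d[i] *= factor;
  }
}
```

The map(tofrom:) clause here is what makes the per-call transfer explicit; avoiding exactly this per-call cost is what the map() directive work above is about.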
May 24 - 28
- General:
- The CPU <-> GPU Mantaflow data-block mapping development (using the OpenMP map() directive) continued:
- In the current state, grids (specified via Python) can be mapped to the GPU, modified there in parallel and then read back.
- This makes it possible to run simple operations with Manta data structures on the GPU (e.g. multiply 2 grids cell-by-cell)
- Caveat: There is still a lot of manual work involved (e.g. grid attributes need to be mapped explicitly in the code)
- While the code is not ready yet, I can recommend anyone interested in OpenMP GPU offloading to watch some of OpenMP's conference videos. (e.g. "Best Practices for OpenMP on NVIDIA GPUs")
- macOS platform: Updated ffmpeg to version 4.4 (rBL62631)
- Next week:
- Continue with OpenMP mapping work. Mantaflow grid and particle system attribute mapping needs to be fully automatic.
- Small evaluation: How big is the overhead that is generated from copying grid data to the GPU? How expensive does a function call need to be for it to pay off?
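The "multiply 2 grids cell-by-cell" experiment mentioned above can be made concrete with a small sketch: one grid only needs to be copied to the device (map(to:)), the other in both directions (map(tofrom:)), which already reduces transfer volume. Names are illustrative, not the actual Mantaflow code.

```cpp
#include <vector>

// Sketch of a cell-by-cell grid multiply on the GPU. Grid `a` is read-only
// on the device, so it is only copied in; grid `b` is read and written, so
// it is copied in and the result copied back out.
void multGrids(std::vector<float> &b, const std::vector<float> &a)
{
  float *pb = b.data();
  const float *pa = a.data();
  const long n = static_cast<long>(b.size());
  #pragma omp target teams distribute parallel for \
      map(to: pa[0:n]) map(tofrom: pb[0:n])
  for (long i = 0; i < n; i++) {
    pb[i] *= pa[i];
  }
}
```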
May 31 - 04
- GPU fluid development:
- Continued with grid to GPU mapping development. Managed to get a first smoke plume simulation running where some of the grid loops ran on the GPU.
- Performance evaluation: The bigger the parallel loop over grid cells, the more gains can be seen with the GPU (obvious ...). The more interesting finding is that grid mapping from/to the GPU should be kept at a minimum. It's not a super expensive operation but should definitely not happen per function call (my 1st idea).
- My GPU simulation tests turned out to be slower because of excessive mapping calls (the bottleneck). The actual loops over cells were much faster though.
- Improvement for test: Frequently used simulation grids (i.e. density, velocity) can be mapped globally and only once when they are created (pragma omp target enter/exit). This already works nicely via the Python API.
- Next week:
- In addition to GPU work, GSoC coordination and getting patches into Mantaflow standalone repository
June 07 - 11
- GPU fluid development:
- The first smoke simulation (running partly on the GPU) has been finished!
- Partly as in, only pressure is being solved on the GPU (the most expensive step in any simulation)
- Some rough numbers (400x400x100 domain simulated with AMD 3700X + Nvidia 1050Ti):
- CPU: 50 sec, GPU: 30 secs
- (Times for 1 pressure step with 600 iterations)
- As mentioned last week, bigger domains result in even greater speed-ups. A more thorough analysis with a more capable GPU (and memory!) should be done in the future. Would be nice to get someone from the community involved here (more info on this will come soon)
- Next week:
- Code clean up, bring current diff in "committable" shape
- Port more functions to GPU (e.g. advection)
- Get a liquid simulation running on GPU (until now everything was just for smoke)
June 14 - 18
- GPU fluid development:
- Managed to get the GPU pressure solver working with liquid simulations. As I expected and hoped, the performance boost is similar to the one from smoke simulations (see report last week)
- Worked mainly on advection functions and the code preprocessor that generates their GPU version. Workflow is similar to when I ported the pressure solver, i.e. adjust single function, confirm that GPU kicks in, repeat.
- Working on a simplification for memory mapping to GPU. An ideal outcome would have no explicit calls on the Python side - right now I still rely on that.
- Got excited about new OpenMP 5.0 directives (loop, unified_shared_memory) - only to realize later that LLVM does not fully support OpenMP 5.0 yet ...
- General:
- Fluid bake optimization patch D11400 by @erik85 has landed in master (slightly adjusted in adefdbc9df)
- Next week:
- Continue adjusting functions to run on GPU (working my way down from computationally most to least expensive functions). The goal is to be able to run a full simulation step on the GPU.
- In order to run first GPU simulations in Blender itself, LLVM from the deps needs to be adjusted. Planning to look into that next week.
June 21 - 25
- GPU fluid development:
- Continued and finished porting all fluid advection functions to the GPU.
- As expected, the speedup gained from this optimization is less noticeable than the one from the pressure functions (fluid advection usually takes up 10%-15% of a simulation step). But still, it's a speedup.
- Started with LLVM adjustments in deps builder - it's a WIP.
- Next week:
- Wrap up pressure and advection GPU ports (i.e. bring into a committable state). These two optimizations should become the central part of a v1 release.
- More LLVM deps adjustments (i.e. build clang with GPU offload capabilities).
June 28 - 02
- GPU fluid development:
- Spent most of my time running tests and comparing CPU / GPU performance.
- Created some slides to document the findings and review what's working / not working so far (will be published later here).
- Next week:
- Same goals as last week, as I didn't spend a lot of time on the code.
- So again, LLVM deps adjustments are high on my to-do list.
July 05 - 09
- GPU fluid development:
- Deps adjustments for GPU offloading: Worked on the builder in general, adding the same offloading options that I used when working on the Mantaflow repository.
- I am able to build with the new options, however, the GPU does not kick in yet. Right now, it's unclear why that is.
- General:
- The deps upgrades from D11748 fit in very well with my GPU tweaks. Started review for that.
- Next week:
- Get the GPU solver working inside Blender (can be prototype level). That's the highest priority for this week.
- Catch up on topics in the tracker, especially finish reviewing (D11748).
July 12 - 16
- GPU fluid development:
- To get OpenMP offloading running inside Blender, a library must not be linked statically (LLVM FAQ).
- Therefore Mantaflow linking had to be changed - it's now being linked as a shared library.
- This change finally made it possible to run OpenMP offloading code in Blender (GPU kicks in).
- So far everything else works fine with fluid code in a shared library. I'll have to watch out for side effects though.
- Next week:
- GPU grid memory management: While running my 1st GPU tests in Blender I encountered some crashes.
- Will investigate what is wrong with GPU memory deallocation.
July 19 - 23
- GPU fluid development:
- Solved the memory deallocation problem from last week. In the end, it was the velocity grid that wasn't freed correctly and caused the hang (the advection step had silently swapped a pointer ...).
- The GPU solver code now works in- and outside of Blender which is good.
- Deps integration still needs to be a lot nicer but for now, it works as is.
- Next week:
- Cleanup GPU code for a v1 release.
- Help with studio related issues.
July 26 - 30
- General:
- Investigated a simulation issue for the studio (fluid slips through moving obstacle). Reproduced issue and found the underlying reason (liquid particles are advected and not stopped in obstacles).
- Issue can be resolved with a simple boolean check (stopInObstacle=True in advectInGrid()). Currently checking for side-effects of this option.
- Started with tests/experiments for a new fluid acceleration technique: a 2D simulation mode. It skips the Z dimension completely and is thus even faster than running a very narrow 3D simulation (e.g. 100x100x1 cells).
- Next week:
- More cleanups for GPU code + start committing to branch
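The stopInObstacle idea above, reduced to a 1D toy: a particle whose advected position would land in an obstacle cell simply keeps its old position. This is only a hedged sketch of the concept; the real advectInGrid() works on 3D grids and velocity fields.

```cpp
#include <vector>
#include <cmath>

// Toy 1D particle advection with an optional obstacle check. If
// stopInObstacle is set and the candidate position falls inside an
// obstacle cell, the move is rejected and the particle stays put
// (so it cannot slip through the obstacle).
float advectParticle(float x, float vel, float dt,
                     const std::vector<bool> &isObstacle, bool stopInObstacle)
{
  const float newX = x + vel * dt;
  const int cell = static_cast<int>(std::floor(newX));
  const bool inside = cell >= 0 && cell < static_cast<int>(isObstacle.size())
                      && isObstacle[cell];
  if (stopInObstacle && inside) {
    return x;  // reject the move: particle must not enter the obstacle
  }
  return newX;
}
```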
August 02 - 06
- General
- There is a new branch for 2D fluid simulation development (fluid-mantaflow-2d)
- Right now, basic liquid and smoke/fire simulations are supported (4e84e076b2).
- macOS library updates for Blender 3.0 (rBL62670)
- Next week:
- Didn't get to continue with GPU fluids last week, planning to do that this week.
- The 2D fluids branch will from now on be part of my weekly schedule.
August 09 - 13
- General:
- For the 2D simulations project, I focused on the liquid viscosity solver this week.
- Reducing the plugin from 3D to 2D went fine (it's basically omitting the Z dimension in the loops).
- However, I noticed that some scenes are much less stable (exploding fluid) than others. I've seen this in 3D before, with 2D it was much more visible.
- Some "matrix print debugging" revealed that this issue might be due to the lack of matrix preconditioning (currently the solver uses a "pure" conjugate gradient).
- Since the preconditioning was on my to-do list before, I figured it's a good time to pick up this task now. Nothing to show so far though.
- If it's ready in time, this optimization could go into master directly too.
- Next week:
- Same goals from week before.
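The missing matrix preconditioning mentioned above can be sketched with a Jacobi (diagonal) preconditioned conjugate gradient on a small dense SPD system. The real viscosity solver operates on large sparse simulation matrices, so this is only an algorithmic illustration, not the Mantaflow implementation.

```cpp
#include <vector>
#include <cmath>

// Jacobi-preconditioned conjugate gradient for a small dense SPD system.
// The preconditioner z = D^-1 r (D = diagonal of A) is about the simplest
// choice; it already improves conditioning over "pure" CG.
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

static Vec matVec(const Mat &A, const Vec &x) {
  Vec y(x.size(), 0.0);
  for (size_t i = 0; i < A.size(); i++)
    for (size_t j = 0; j < x.size(); j++)
      y[i] += A[i][j] * x[j];
  return y;
}

static double dot(const Vec &a, const Vec &b) {
  double s = 0.0;
  for (size_t i = 0; i < a.size(); i++) s += a[i] * b[i];
  return s;
}

Vec pcgSolve(const Mat &A, const Vec &b, int maxIter = 100, double tol = 1e-10)
{
  const size_t n = b.size();
  Vec x(n, 0.0), r = b;  // x0 = 0, so the initial residual r0 = b
  Vec z(n), p(n);
  for (size_t i = 0; i < n; i++) z[i] = r[i] / A[i][i];  // Jacobi: z = D^-1 r
  p = z;
  double rz = dot(r, z);
  for (int it = 0; it < maxIter && std::sqrt(dot(r, r)) > tol; it++) {
    const Vec Ap = matVec(A, p);
    const double alpha = rz / dot(p, Ap);
    for (size_t i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
    for (size_t i = 0; i < n; i++) z[i] = r[i] / A[i][i];
    const double rzNew = dot(r, z);
    const double beta = rzNew / rz;
    rz = rzNew;
    for (size_t i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
  }
  return x;
}
```

For the simulation matrices this would be swapped for a sparse representation; the structure of the iteration stays the same.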
August 16 - 20
- General:
- Continued with my efforts to add preconditioning to the viscosity solver (the goal of this is faster and more stable viscosity simulations, especially for 2D).
- It's taking up more time than expected, so I'm moving this to the backlog (post Blender 3.0 goal).
- Reading up on GSoC progress.
- Next week:
- Get Eevee and Cycles to work in 2D fluids branch. IMO, the most important next feature for the 2D branch.
- Get GPU fluids v1 out.
August 23 - 27
- General:
- Work on Cycles rendering for 2D fluids branch, get actual render to work
- Tested and went over D12002, prepared upstream integration
- Emails
- Next week:
- Push branch updates for fluid-mantaflow-2d and fluid-mantaflow-gpu
August 30 - 03
- General:
- Fluid: Parallelizations for Mantaflow functions (D12002) (5a02d0da7a)
- Next week:
- More work on 2D fluids.
September 06 - 10
- GPU fluid development:
- The new branch for GPU fluid simulation development is finally online (fluid-mantaflow-gpu)
- While the branch is still in an experimental state it would be nice to get interested people involved soon.
- Next week:
- Will be working half the week only, then taking some days off.
- General fluid support.
September 13 - 17
- Upcoming