@Anjok Will we see the baseline updated to use commands, or get a menu option in the GUI for specific stems? Like drums, bass, guitars, keys, etc.? Yes, lower-end machines will take longer, but it'll damn well be worth it.
I can make a guide on how to install it on Google Colab, since it won't require any coding there. Otherwise, this is the GitHub page, which has a few examples and comparisons: https://github.com/facebookresearch/demucs
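For anyone who'd rather run it locally instead of on Colab, here's a minimal sketch. It assumes you have Python 3 with pip and ffmpeg available; the file name "my_song.mp3" is just a placeholder:

```python
# Minimal sketch for running Demucs locally.
# Assumes Python 3 with pip and ffmpeg on the system;
# "my_song.mp3" is a placeholder file name.
import subprocess
import sys

# Install the demucs package from PyPI (one-time step).
subprocess.run([sys.executable, "-m", "pip", "install", "demucs"], check=True)

# Separate a track into stems (drums, bass, other, vocals by default).
# Output lands under ./separated/<model_name>/<track_name>/ as audio files.
subprocess.run([sys.executable, "-m", "demucs", "my_song.mp3"], check=True)
```

If you only want a vocal/instrumental split, recent Demucs versions also accept a `--two-stems=vocals` flag on that same command.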
It's my go-to way for DIY stems. The downside is that on some tracks it can ignore background vocals and leave them with the synths rather than in the vocals track, but that varies from song to song.
Anjok... has the AI ever been used on a speaking part of a song?
Does it struggle with plain talking as opposed to singing, given the amount of reverb some singers use?
Just speaking from my own experience here, I have found it handles speaking exceptionally well in general.
I've heard some fantastic results on rap/hip-hop tracks, etc. The flip side is that when the instrumentation is bare, every tiny missed detail stands out. It's easier to scrub in a spectral editor since there isn't much on the spectrum to dig through... but everything that's left sticks out.
But really, since it's trained on voice, a lot depends on how well it recognizes a specific type of vocal sound, and how often it mistakes certain instrumentation for voice. That's why having multiple models could prove very useful. Reverb kind of fits into that category as well. This model is trained on different music than the primary AI over in the other thread, and the primary one can't handle reverb nearly as well as this one.