We built some cool new machine learning models for separating birdsong in soundscape recordings, and demonstrated how to use the separated audio to improve downstream classification. The separation model is available on GitHub, along with lots more examples. There's also a paper.
We just launched a machine learning competition for bird identification in soundscapes! This is a surprisingly difficult problem, and solving it can ultimately help with ecosystem health monitoring (for example, if birds X, Y, and Z are present, you can make inferences about their food and predators). I've been building models in this space for a couple of years, working with both the Cornell Lab of Ornithology and the Cal Academy of Sciences in my spare time. I'm excited to see what sorts of ideas we get from the larger community!