Intro to my project, then how to improve predictions for morning/evening choruses? #547
Unanswered
BradGilliam asked this question in Q&A
Hello,
Here is an intro to my situation: I have a ~75 TB dataset of acoustic recordings (primarily targeting birds) spanning 20 years and counting, mostly from a wooded property in SE Ohio. I'm not a biologist, an ML expert, or a Python programmer, and I haven't used Raven Pro (yet). My background is electrical/electronics and systems engineering, and I've attended two recording workshops held by the Macaulay Library (2005, 2007). My initial take is that BirdNET Analyzer may be just the ticket for processing my large dataset.
I have installed the new Windows GUI to start with, and it seems to be producing decent predictions from my test tracks. CSO (confidence, sensitivity, overlap) parameter settings of 0.9, 0.5, and 0.0 seem to suffice, although I'll continue to evaluate these before settling on a set and launching the full-scale dataset analysis. Additionally, I have the GUI set to use a custom species list (the 2022 ABA checklist converted to the .txt format the Analyzer requires), and I export the output as CSV files, for which I have created VBA macros to organize the prediction results and make clearer sense of them.
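For reference, here's a rough Python sketch of that checklist conversion (the input column names are placeholders for whatever the ABA export actually uses; as I understand it, the Analyzer expects one species per line in the form "Scientific name_Common name"):

```python
# Sketch: convert a checklist CSV into BirdNET's species-list .txt format.
# The Analyzer expects one species per line as "Scientific name_Common name",
# e.g. "Cardinalis cardinalis_Northern Cardinal".
# The column names below ("scientific_name", "common_name") are assumptions;
# adjust them to match the actual checklist export.
import csv

def checklist_to_species_list(csv_path: str, txt_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(txt_path, "w", encoding="utf-8") as dst:
        for row in csv.DictReader(src):
            sci = row["scientific_name"].strip()
            com = row["common_name"].strip()
            if sci and com:
                dst.write(f"{sci}_{com}\n")

checklist_to_species_list("aba_checklist_2022.csv", "species_list.txt")
```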
Key observation: for a given mid-May morning chorus track recorded in SE Ohio (~5-6 AM), the Analyzer returns only one species prediction per 3-second interval, when in fact multiple species are vocalizing simultaneously in close proximity to the mics during those intervals. It appears that the loudest bird is the one that gets predicted.
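I'm wondering whether the fix is as simple as running with a lower minimum confidence (since a segment can carry several result rows when more than one species clears the threshold) and then filtering in post. A rough pandas sketch of what that post-filtering might look like (the file name is a placeholder; the column names assume the Analyzer's CSV export, so check your own header):

```python
import pandas as pd

# Placeholder file name; columns assumed from the Analyzer's CSV export:
# "Start (s)", "End (s)", "Common name", "Confidence".
df = pd.read_csv("morning_chorus.BirdNET.results.csv")

# Analyze with a lower minimum confidence (e.g. 0.25) so quieter, overlapping
# vocalizations are retained, then apply a threshold here instead.
df = df[df["Confidence"] >= 0.25]

# Group detections by 3-second segment; each group may hold several species.
for (start, end), group in df.groupby(["Start (s)", "End (s)"]):
    ranked = group.sort_values("Confidence", ascending=False)
    names = ", ".join(
        f"{r['Common name']} ({r['Confidence']:.2f})" for _, r in ranked.iterrows()
    )
    print(f"{start:7.1f}-{end:7.1f} s  {names}")
```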
I’ve reviewed the README & Guide but I could still use some expert advice:
For some recording seasons where I used shotgun mics pointed in different directions, I'll need to use WaveLab Elements to separate the L and R channels, creating even more tracks in the dataset to analyze.
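That channel split could probably also be scripted; a minimal Python sketch using the soundfile library, with placeholder file paths:

```python
import soundfile as sf

# Placeholder path; assumes a two-channel WAV from the shotgun-mic pair.
# Note: sf.read loads the whole file into memory; for very long recordings,
# chunked reading via sf.blocks() may be preferable.
data, sr = sf.read("2019-05-12_shotgun_pair.wav")

if data.ndim == 2 and data.shape[1] == 2:
    sf.write("2019-05-12_left.wav", data[:, 0], sr)   # L channel -> mono file
    sf.write("2019-05-12_right.wav", data[:, 1], sr)  # R channel -> mono file
```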
(Minor thought: has any consideration been given to the CLO hosting a hands-on BirdNET workshop, assuming one hasn't been held yet?)
Thanks in advance for any help you may provide!
Brad
Replies: 1 comment
-
Hey @BradGilliam, regarding multiple species detections, you should read this thread: #541. For such a large dataset, you may want to consider software that runs BirdNET inference more rapidly than the Analyzer GUI, such as BirdNET-Go or Chirpity.