sync roost counting changes with Canadian data #2

Open: wants to merge 23 commits into base `tnmy/canadian_latest`
bb371f7
minor updates to experiments_v3_linear_adaptor
wenlongzhao094 Aug 11, 2023
7c6fcc1
experiment_v4_maskrcnn
wenlongzhao094 Aug 11, 2023
002b976
count birds and bats, half done feature
wenlongzhao094 Aug 11, 2023
f9f6744
tools: README, post_hoc_counting
wenlongzhao094 Sep 3, 2023
3ad1457
counting animals in deployment
wenlongzhao094 Sep 23, 2023
a8a35ee
counting animals in deployment
wenlongzhao094 Sep 23, 2023
a7f5762
delete scans if no successfully rendered arrays
wenlongzhao094 Sep 24, 2023
0d7e644
add counting to run_day_station, yet to test
wenlongzhao094 Dec 31, 2023
4056fd5
pilot run
wenlongzhao094 Jan 19, 2024
604de62
debug counting in the system
wenlongzhao094 Jan 20, 2024
2d33620
vectorize
wenlongzhao094 Jan 23, 2024
5748db8
automate output file transfer
wenlongzhao094 Jan 26, 2024
9533887
count bats
wenlongzhao094 Feb 11, 2024
4f07081
config and launch
wenlongzhao094 Feb 28, 2024
424530d
no longer need publish_images.sh
wenlongzhao094 Mar 22, 2024
d492a76
Fix track_id in sweeps to ease the merge of per-sweep counts with scr…
wenlongzhao094 Apr 22, 2024
8e38e58
bug fix in post_hoc_counting/count_texas_bats_v3
wenlongzhao094 Apr 22, 2024
26b7252
post-hoc count bats w/ dualpol and dBZ filtering
wenlongzhao094 Jun 12, 2024
07cbc72
add counting with dualpol and reflectivity thresholds to deployment
wenlongzhao094 Jun 13, 2024
6051131
counting config
wenlongzhao094 Jun 14, 2024
2603ae2
debugging improved rsync
wenlongzhao094 Aug 11, 2024
e5410ad
us_sunrise_v3_debug in progress
wenlongzhao094 Aug 18, 2024
9cdfd7e
ready for us deployment, counting and auto result transfer
wenlongzhao094 Aug 18, 2024
5 changes: 5 additions & 0 deletions .gitignore
@@ -1,4 +1,9 @@
# Manual Additions
data
scratch
tools/post_hoc_counting/count_texas_bats_v3/texas_bats_v3*
tools/post_hoc_counting/unused_count_texas_bats_v3_screened/texas_bats_v3_screened*
tools/post_hoc_counting/unused_count_texas_bats_v3_long/texas_bats_v3_long*
tools/deployment_requests
development/custom_eval/collected_results

75 changes: 57 additions & 18 deletions README.md
@@ -18,11 +18,25 @@ Roost detection is based on [Detectron2](https://github.com/darkecology/detectro
- **tracking**
- **utils** contains various utils, scripts to postprocess roost tracks, and scripts to generate visualization
- **tools** is for system deployment
- **demo.py** downloads radar scans, renders arrays to be processed by models and some channels as images for
visualization, detects and tracks roosts in them, and postprocesses the results.
- **launch_demo.py** can call **sbatch demo.sh** multiple times to launch multiple jobs in parallel,
each for a station-year and on separate CPUs. It is configured for birds.
- **launch_demo_bats.py** is configured for bats.
- **demo.sh** includes commands to run for each station-year, including running **demo.py** and
pushing outputs from the computing cluster to our doppler server.
- **gen_deploy_station_days_scripts.py** can create a **launch\*.py** file and corresponding **\*.sh** files,
when we want each slurm job to include multiple calls to **demo.py** (e.g., process several time periods at
a station within one slurm job).
- **publish_images.sh** sends images generated during system deployment to a server where we archive data.
This has been incorporated into **demo.sh**.
- (outdated) **demo.ipynb** is for interactively running the system and is not actively maintained.
- (customization) **launch_demo_tiff.py**, **demo_tiff.sh**, and **demo_tiff.py** are customized for the case
where rendered arrays are provided as tiff files.
- (deprecated) **add_local_time_to_output_files.py** takes in scans*.txt and tracks*.txt files produced by
system deployment and appends local time to each line. The system should now handle this automatically.
- (deprecated) **post_hoc_counting** takes in tracks* files and computes the estimated number of animals in
each bounding box. The system should now handle this automatically.
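The launch pattern described above, one slurm job per station-year submitted via `sbatch demo.sh`, can be sketched as follows. This is a hedged illustration: the station IDs, years, and the arguments passed to **demo.sh** are assumptions, not the repository's actual interface.

```python
import subprocess

# Example station-year grid (assumed values, not from the repo config).
STATIONS = ["KDFW", "KEWX"]
YEARS = [2023, 2024]

def launch_all(dry_run=True):
    """Build (and optionally submit) one sbatch command per station-year."""
    cmds = []
    for station in STATIONS:
        for year in YEARS:
            # Hypothetical positional arguments to demo.sh.
            cmd = ["sbatch", "demo.sh", station, str(year)]
            cmds.append(cmd)
            if not dry_run:
                subprocess.run(cmd, check=True)
    return cmds
```

With `dry_run=True` the sketch only builds the command list, which makes the launch plan easy to inspect before submitting anything to slurm.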

#### Installation
1. See Detectron2 requirements
@@ -60,15 +74,21 @@ To run detection with GPU, check the cuda version at, for example, `/usr/local/c
- Monitor from local: `ssh -N -f -L localhost:9990:localhost:9991 username@server`
- Enter `localhost:9990` from a local browser tab

#### Develop a detection model
- **development** contains all training and evaluation scripts.
- To prepare a training dataset (i.e. rendering arrays from radar scans and
generating json files to define datasets with annotations), refer to
**Installation** and **Dataset Preparation** in the README of
[wsrdata](https://github.com/darkecology/wsrdata.git).
- Before training, optionally run **try_load_arrays.py** to make sure there are no broken npz files.

#### Run Inference
Latest model checkpoints are available
[here](https://drive.google.com/drive/folders/1ApVX-PFYVzRn4lgTZPJNFDHnUbhfcz6E?usp=sharing).
- v1: Beginning of Summer 2021 Zezhou model.
- v2: End of Summer 2021 Wenlong model with 48 AP. Better backbone, anchors, and other config.
- v3: End of Winter 2021 Gustavo model with 55 AP. Adapter layer and temporal features.

#### Deploy the system
A Colab notebook for running small-scale inference is
[here](https://colab.research.google.com/drive/1UD6qtDSAzFRUDttqsUGRhwNwS0O4jGaY?usp=sharing).
Large-scale deployment can be run on CPU servers as follows.
@@ -86,12 +106,13 @@ Review the updated AWS config.
```

3. Modify **demo.py** for system customization.
For example, DET_CFG can be changed to adopt a new detector.
CNT_CFG can be changed for different counting assumptions.
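As a purely illustrative sketch of what such customization might look like, the fragment below shows hypothetical shapes for the two configs; every key name is an assumption for illustration, not the repository's actual schema.

```python
# Hypothetical config shapes; all key names are assumptions.
DET_CFG = {
    "ckpt_path": "checkpoints/v3.pth",  # point this at a new detector checkpoint
    "score_thresh": 0.3,                # minimum detection score to keep
}
CNT_CFG = {
    "species": "bats",                  # counting assumptions differ per species
    "dBZ_max": 35.0,                    # reflectivity threshold (assumed key)
}
```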

4. Make sure the environment is activated. Then consider two deployment scenarios.
1. In the first, we process consecutive days at stations, i.e. we launch one job for
each set of continuous days at a station.
Under **tools**, modify VARIABLES in **launch_demo.py** and run `python launch_demo.py`
to submit jobs to slurm and process multiple batches of data.

2. In the second, we process scattered days at stations, i.e. we launch one job for
@@ -109,16 +130,34 @@ For example, DET_CFG can be changed to adopt a new detector.
EXPERIMENT_NAME output directory. Thereby when we copy newly processed data to the server
that hosts the web UI, previous data won't need to be copied again.
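The incremental-copy idea above can be sketched as follows: because each run writes under its own EXPERIMENT_NAME directory, only that directory needs to be pushed to the web UI host. The local layout and the remote host below are placeholders, not the system's actual paths.

```python
import subprocess

def push_outputs(experiment_name, dry_run=True):
    """Build (and optionally run) an rsync command for one experiment's outputs."""
    src = f"output/{experiment_name}/"                  # local outputs (assumed layout)
    dst = f"user@webui-host:/data/{experiment_name}/"   # placeholder remote
    cmd = ["rsync", "-az", src, dst]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```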

#### Notes about array, image, and annotation directions
- geographic direction: large y is North (row 0 is South), large x is East
- image direction: large y is South (row 0 is North), large x is East
1. Rendering
1. [Render arrays](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/data/renderer.py#L13)
for the model to process in the **geographic** direction
2. [Render png images](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/data/renderer.py#L161)
for visualization in the **image** direction
3. Generate the list of scans with successfully rendered arrays
2. Detector in the **geographic** direction
1. During training and evaluation, the system does not use our defined
[Detector class](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/system.py#L27)
1. [dataloader](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/development/experiments_v2/train_roost_detector.py#L220):
XYXY
2. During deployment, use our defined
[Detector class](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/system.py#L27)
which wraps a Predictor. The run function of this Detector [flips the y axis](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/detection/detector.py#L115) of predicted boxes to get the **image** direction and outputs [predicted boxes](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/detection/detector.py#L118) in xyr, where xy are the center coordinates.
4. For rain removal post-processing using dualpol arrays,
[flip the y axis](https://github.com/darkecology/roost-system/blob/b27ffd17e773dfeaedac2a79d453395614e8b679/src/roosts/utils/postprocess.py#L188)
to operate in the **image** direction
5. Generate the list of predicted tracks to accompany png images in the **image** direction
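The y-axis flip in step 2.2 can be sketched as below, assuming an H-pixel-tall array, XYXY boxes in the geographic direction, and xyr defined as center coordinates plus a radius; taking the radius as half the larger side is an assumption for illustration.

```python
import numpy as np

def geo_xyxy_to_image_xyr(boxes, height):
    """Flip y of geographic XYXY boxes and convert to image-direction xyr."""
    boxes = np.asarray(boxes, dtype=float)
    x1, y1, x2, y2 = boxes.T
    # Flipping the y axis maps geographic y to height - y, which also swaps
    # which of the two y coordinates is the smaller one in image space.
    y1_img = height - y2
    y2_img = height - y1
    cx = (x1 + x2) / 2
    cy = (y1_img + y2_img) / 2
    r = np.maximum(x2 - x1, y2_img - y1_img) / 2  # assumed radius convention
    return np.stack([cx, cy, r], axis=1)
```

For a box (10, 20, 30, 60) on a 100-pixel array, the image-space center lands at (20, 60) with radius 20.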


#### User Interface Visualization
In the generated csv files that can be imported to a user interface for visualization,
the following information can be used to further filter the tracks:
- track length
- detection scores (-1 indicates that the bbox comes from our tracking algorithm rather than the detector)
- bbox sizes
- the minutes from sunrise/sunset of the first bbox in a track
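A hedged sketch of such post-hoc filtering is below; the column names (`track_id`, `det_score`, `minutes_from_sunrise`) are illustrative assumptions about the csv schema, not the actual exported headers.

```python
import pandas as pd

def filter_tracks(df, min_len=3, min_score=0.5, max_minutes=90):
    """Keep tracks that are long enough, well detected, and near sunrise/sunset."""
    keep = []
    for tid, g in df.groupby("track_id"):
        if len(g) < min_len:
            continue
        # det_score == -1 marks boxes filled in by the tracker, not the detector
        det = g[g["det_score"] >= 0]
        if det.empty or det["det_score"].max() < min_score:
            continue
        if abs(g["minutes_from_sunrise"].iloc[0]) > max_minutes:
            continue
        keep.append(tid)
    return df[df["track_id"].isin(keep)]
```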

4 changes: 2 additions & 2 deletions development/experiments_v3_linear_adaptor/adaptors_fpn.py
@@ -101,7 +101,7 @@ def __init__(
"""
super(Adaptor_FPN, self).__init__()
assert isinstance(bottom_up, Backbone)

# Feature map strides and channels from the bottom up network (e.g. ResNet)
input_shapes = bottom_up.output_shape()
in_strides = [input_shapes[f].stride for f in in_features]
@@ -136,7 +136,7 @@ def __init__(

lateral_convs.append(lateral_conv)
output_convs.append(output_conv)

# GPS: adaptor
self.is_adaptor = adaptor
if adaptor == 'linear':
@@ -188,7 +188,7 @@
#### GPS: when using adaptors ######################
####################################################
if args.adaptor != 'None':
cfg.MODEL.BACKBONE.NAME = 'build_adaptor_resnet_fpn_backbone'
cfg.ADAPTOR_TYPE = args.adaptor
cfg.ADAPTOR_IN_CHANNELS = len(CHANNELS)*3
cfg.MODEL.PIXEL_MEAN = []
@@ -1,6 +1,6 @@
#!/bin/sh
#
#SBATCH --job-name=150
#SBATCH -o /mnt/nfs/scratch1/gperezsarabi/darkecology/roost-system/development/experiments/gypsum_logs/150.txt
#SBATCH --partition=rtx8000-long
#SBATCH --gres=gpu:1
@@ -223,7 +223,7 @@
#### GPS: when using adaptors ######################
####################################################
if args.adaptor != 'None':
cfg.MODEL.BACKBONE.NAME = 'build_adaptor_resnet_fpn_backbone'
cfg.ADAPTOR_TYPE = args.adaptor
cfg.ADAPTOR_IN_CHANNELS = len(CHANNELS) * 3
cfg.MODEL.PIXEL_MEAN = []
19 changes: 19 additions & 0 deletions development/experiments_v4_maskrcnn/10/logs/collected_results.txt
@@ -0,0 +1,19 @@
59.868 61.742 60.690 59.581 56.578 57.791
61.490 63.330 62.444 59.706 59.957 56.908
62.047 63.143 63.261 61.664 61.078 57.814
61.333 64.117 62.834 61.956 60.327 59.377
63.773 63.680 64.338 62.759 60.603 60.742
62.363 64.611 64.303 59.368 61.744 60.659
62.026 64.398 62.183 61.437 60.890 54.307
64.037 64.489 64.958 61.062 60.357 61.097
62.174 63.692 63.536 61.775 60.627 60.479
63.722 64.667 64.593 63.211 61.016 62.302
60.428 61.446 61.039 58.615 59.548 57.710
61.804 62.868 62.204 59.927 60.188 58.440
62.246 62.848 63.075 61.672 59.774 59.022
62.364 64.171 63.078 60.910 59.773 59.453
63.589 64.239 64.561 62.847 61.462 61.639
63.520 64.051 63.407 60.870 60.871 55.785
63.271 63.745 62.239 63.103 60.222 59.818
64.021 64.964 64.346 64.260 60.861 60.016
63.574 64.589 64.897 62.112 60.495 60.699