`input_data` is a SparseConvTensor (see the spconv library). `batch` records the sample ID of each voxel (required by spconv). `xyz` is the lidar coordinates (see the dataloader code and the xyz-related functions in backbone.py).
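A minimal sketch of how such a voxelized minibatch is typically assembled for spconv. The voxel counts, feature layout, and spatial extent below are illustrative assumptions, not the repo's actual dataloader output:

```python
import torch

# Hypothetical minibatch: 2 scans with 3 and 2 voxels respectively.
num_voxels = [3, 2]
features = torch.randn(sum(num_voxels), 4)  # e.g. x, y, z, intensity per voxel

# batch: one sample ID per voxel, concatenated across the minibatch.
batch = torch.cat([torch.full((n,), i, dtype=torch.long)
                   for i, n in enumerate(num_voxels)])

# spconv expects integer indices of shape [N, 4]: (batch_idx, z, y, x).
zyx = torch.randint(0, 64, (sum(num_voxels), 3))
indices = torch.cat([batch.unsqueeze(1), zyx], dim=1).int()

# The sparse tensor itself would then be built as (requires spconv installed):
# import spconv.pytorch as spconv
# input_data = spconv.SparseConvTensor(features, indices,
#                                      spatial_shape=[64, 64, 64],
#                                      batch_size=len(num_voxels))
```

Note that `batch` here is per-voxel, so for a single sample all its entries share one ID; it is not `torch.arange(batch_size)`.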
So in training mode, for a single entry in a minibatch, this would be `torch.arange([number of samples])`?
And in evaluation mode, this would be set in `collation_fn_voxelmean_tta` using the `inds_recons` value returned from the loader?
Hi.
I am attempting to replicate the training of the SFPNet in order to investigate other training improvements.
Can you please give me some insight on how to do the training? I don't see any example training code in this repo.
Specifically, the Semantic model's `forward` method takes three inputs: `input_data`, `xyz`, and `batch`.

I assume that `input_data` is a float tensor of shape BxNxF, where the first three channels of the F dimension are also x, y, and z. Is `xyz` the voxel coordinates as a long tensor, or the original lidar coordinates as a float tensor? Is its shape BxNx3? Is `batch` just `torch.arange(batch_size)`, with shape B?
Thank you for your help.
Avi