I have defined a job using the `job` decorator syntax. I have also defined the success callback handler:
```python
import base64
import logging

from django.core.files.base import ContentFile

# Analysis, PredictionResult, SegmentationResult, AnalysisArtifact
# are app-specific Django models.

logger = logging.getLogger(__name__)


def report_success(job, connection, result, *args, **kwargs):
    try:
        analysis = Analysis.objects.get(id=job.id)
        analysis.status = Analysis.Status.COMPLETED
        if "prediction" in result:
            PredictionResult.objects.create(
                analysis=analysis, prediction=result["prediction"]
            )
        if "segmentation" in result:
            # decode the base64-encoded segmentation masks and store them
            for tp, (name, data) in result["segmentation"].items():
                f = ContentFile(content=base64.b64decode(data), name=name)
                SegmentationResult.objects.create(
                    analysis=analysis, segmentation_mask=f, mask_type=tp
                )
        if "artifacts" in result:
            # decode and store intermediate model artifacts the same way
            for tp, (name, data) in result["artifacts"].items():
                f = ContentFile(content=base64.b64decode(data), name=name)
                AnalysisArtifact.objects.create(
                    analysis=analysis, artifact=f, artifact_type=tp
                )
        analysis.save()
        logger.info(f"Analysis {analysis.id} processed successfully")
    except Analysis.DoesNotExist:
        logger.error(f"Analysis with job id {job.id} not found")
```
Basically, the callback decodes the base64-encoded data and saves it into the respective Django model `FileField`s.
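For context, the worker side presumably builds the `segmentation` and `artifacts` mappings by base64-encoding the raw bytes before returning them. A minimal sketch — the function name and the `mask_type -> (filename, bytes)` shape are assumptions inferred from how the callback unpacks the result:

```python
import base64


def build_segmentation_payload(masks):
    """Encode raw mask bytes for transport in the job result.

    masks: hypothetical mapping of mask_type -> (filename, raw bytes),
    mirroring the unpacking done in report_success above.
    """
    return {
        tp: (name, base64.b64encode(data).decode("ascii"))
        for tp, (name, data) in masks.items()
    }
```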
When the job execution completes, RQ fails to remove the job from the `StartedJobRegistry`:
```
ai_worker-1 | Analysis 8df26860-bdcc-49da-a537-43088e7bddca processed successfully
ai_worker-1 | Handling successful execution of job 8df26860-bdcc-49da-a537-43088e7bddca
ai_worker-1 | Saving job 8df26860-bdcc-49da-a537-43088e7bddca's successful execution result
ai_worker-1 | Removing job 8df26860-bdcc-49da-a537-43088e7bddca from StartedJobRegistry
ai_worker-1 | Saving job 8df26860-bdcc-49da-a537-43088e7bddca's successful execution result
ai_worker-1 | Sent heartbeat to prevent worker timeout. Next one should arrive in 90 seconds.
ai_worker-1 | Removing job 8df26860-bdcc-49da-a537-43088e7bddca from StartedJobRegistry
ai_worker-1 | Saving job 8df26860-bdcc-49da-a537-43088e7bddca's successful execution result
```
The last two log lines keep repeating until the job timeout is reached.
Any ideas to debug this issue?
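One way to narrow this down is to measure how large the result payload actually is before the job returns it — RQ's default serializer pickles the return value before writing it to Redis. A sketch; the 10 MB mask below is purely an illustrative assumption:

```python
import base64
import pickle


def result_payload_size(result):
    # Approximate the number of bytes RQ will try to write to Redis
    # (the default serializer pickles the job's return value).
    return len(pickle.dumps(result))


# Illustrative: a hypothetical 10 MB mask grows by roughly a third
# once base64-encoded, before any pickling overhead.
raw = b"\x00" * 10_000_000
result = {
    "segmentation": {"tumor": ("mask.png", base64.b64encode(raw).decode("ascii"))}
}
```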
I've found that the result JSON was too large to be saved in Redis; setting `result_ttl=0` in the job decorator is a workaround for this issue. Is there any way to specify an option that skips saving the result in Redis but still saves the job execution details?