
TypeError: can't pickle _thread.lock objects #424

Closed
OnufryKlaczynski opened this issue Mar 14, 2020 · 48 comments

@OnufryKlaczynski

Django 2.2.11
python 3.7.0
django-q 1.2.1
windows 10

Hello, when I run manage.py qcluster I get an error. Does anybody know what the source of it could be and how to resolve it?

Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django\core\management\base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "C:\Users\Mateusz\Desktop\project\env\lib\site-packages\django_q\cluster.py", line 65, in start
    self.sentinel.start()
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Mateusz\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
@Koed00
Owner

Koed00 commented Mar 15, 2020

This is also happening to macOS users in #389.
It seems related to some changes in Python 3.7 & 3.8.
We don't see it happening on Linux though, and apparently also not on Python 3.6.

@Koed00
Owner

Koed00 commented Mar 16, 2020

The problem lies with the shared Value objects the workers keep around as timers. Those were made 'process safe' by adding locks. In theory this should work fine on all platforms, but I guess some minor changes were made in the newer Pythons.

It's very hard for me to test, because I have neither macOS nor Windows as a development environment, and this will probably be one of those cases where the Python documentation hasn't kept up yet and I will just have to try things.

That said, there is some indication that setting the lock type explicitly on the ctype instance might help. I've done this in the lock branch here: a1f211d

Unless someone wants to help out, it will be a while before I can test this on one of the affected OSes.
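The underlying failure is easy to reproduce outside django-q. A minimal sketch (independent of django-q's code): under the spawn start method the new process's state is serialized with pickle, and a plain thread lock of the kind many connection and synchronization objects hold internally refuses to pickle.

```python
# Minimal reproduction of the underlying error (independent of django-q):
# a plain thread lock cannot be pickled, so any object that holds one fails
# when multiprocessing's spawn start method tries to serialize it.
import pickle
import threading

lock = threading.Lock()
try:
    pickle.dumps(lock)
except TypeError as exc:
    # Message varies by Python version, e.g.
    # "cannot pickle '_thread.lock' object" on 3.8
    print(exc)
```

Any object reachable from the process being spawned that contains such a lock triggers the same TypeError seen in the tracebacks above.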

@Solanar

Solanar commented Mar 18, 2020

I tested this out on Windows, Py 3.8; it doesn't work. The issue is that the broker connection object is unpicklable, possibly because the network connection is process-local. With spawn-context multiprocessing (macOS/Windows), new processes are not forked children as on Linux; they are started from scratch. You should be able to reproduce this environment on Linux with multiprocessing.set_start_method('spawn')

from django_q.brokers import get_broker
import pickle
pickle.dumps(get_broker().connection)
# TypeError: cannot pickle '_thread.lock' object
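One common way around an unpicklable connection (a sketch only; LazyBroker and make_connection are illustrative names, not django-q's API) is to keep the live connection out of the pickled state entirely and re-create it lazily in each process:

```python
# Illustrative sketch, not django-q's actual code: a broker that drops its
# live connection before pickling, so spawn can serialize it, and rebuilds
# the connection on first use in the child process.
import pickle

class LazyBroker:
    def __init__(self, make_connection):
        self._make_connection = make_connection  # must itself be picklable
        self._connection = None

    @property
    def connection(self):
        # Re-created on first access in whichever process needs it.
        if self._connection is None:
            self._connection = self._make_connection()
        return self._connection

    def __getstate__(self):
        # Never send the live connection across the process boundary.
        state = self.__dict__.copy()
        state["_connection"] = None
        return state

def make_connection():
    # Stand-in for a real Redis/ORM connection factory.
    return object()

broker = LazyBroker(make_connection)
broker.connection  # touch it so a live "connection" exists
clone = pickle.loads(pickle.dumps(broker))  # works: no connection in the pickle
```

Each spawned process then opens its own connection instead of inheriting one, which is what fork gives you for free on Linux.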

@OnufryKlaczynski
Author

OnufryKlaczynski commented Mar 21, 2020

Python 3.7.0, Windows; no problems with this code:

>>> from django_q.brokers import get_broker
>>> import pickle
>>> pickle.dumps(get_broker().connection)
b'\x80\x03cdjango.db.models.query\nQuerySet\nq\x00)\x81q\x01}q\x02(X\x05\x00\x00\x00modelq\x03cdjango_q.models\nOrmQ\nq\x04X\x03\x00\x00\x00_dbq\x05X\x07\x00\x00\x00defaultq\x06X\x06\x00\x00\x00_hintsq\x07}q\x08X\x05\x00\x00\x00queryq\tcdjango.db.models.sql.query\nQuery\nq\n)\x81q\x0b}q\x0c(h\x03h\x04X\x0e\x00\x00\x00alias_refcountq\r}q\x0eX\r\x00\x00\x00django_q_ormqq\x0fK\x00sX\t\x00\x00\x00alias_mapq\x10ccollections\nOrderedDict\nq\x11)Rq\x12h\x0fcdjango.db.models.sql.datastructures\nBaseTable\nq\x13)\x81q\x14}q\x15(X\n\x00\x00\x00table_nameq\x16h\x0fX\x0b\x00\x00\x00table_aliasq\x17h\x0fubsX\x10\x00\x00\x00external_aliasesq\x18cbuiltins\nset\nq\x19]q\x1a\x85q\x1bRq\x1cX\t\x00\x00\x00table_mapq\x1d}q\x1eh\x0f]q\x1fh\x0fasX\x0c\x00\x00\x00default_colsq \x88X\x10\x00\x00\x00default_orderingq!\x88X\x11\x00\x00\x00standard_orderingq"\x88X\x0c\x00\x00\x00used_aliasesq#h\x19]q$\x85q%Rq&X\x10\x00\x00\x00filter_is_stickyq\'\x89X\x08\x00\x00\x00subqueryq(\x89X\x06\x00\x00\x00selectq))X\x05\x00\x00\x00whereq*cdjango.db.models.sql.where\nWhereNode\nq+)\x81q,}q-(X\x08\x00\x00\x00childrenq.]q/X\t\x00\x00\x00connectorq0X\x03\x00\x00\x00ANDq1X\x07\x00\x00\x00negatedq2\x89X\x12\x00\x00\x00contains_aggregateq3\x89ubX\x0b\x00\x00\x00where_classq4h+X\x08\x00\x00\x00group_byq5NX\x08\x00\x00\x00order_byq6)X\x08\x00\x00\x00low_markq7K\x00X\t\x00\x00\x00high_markq8NX\x08\x00\x00\x00distinctq9\x89X\x0f\x00\x00\x00distinct_fieldsq:)X\x11\x00\x00\x00select_for_updateq;\x89X\x18\x00\x00\x00select_for_update_nowaitq<\x89X\x1d\x00\x00\x00select_for_update_skip_lockedq=\x89X\x14\x00\x00\x00select_for_update_ofq>)X\x0e\x00\x00\x00select_relatedq?\x89X\t\x00\x00\x00max_depthq@K\x05X\r\x00\x00\x00values_selectqA)X\x0c\x00\x00\x00_annotationsqBNX\x16\x00\x00\x00annotation_select_maskqCNX\x18\x00\x00\x00_annotation_select_cacheqDNX\n\x00\x00\x00combinatorqENX\x0e\x00\x00\x00combinator_allqF\x89X\x10\x00\x00\x00combined_queriesqG)X\x06\x00\x00\x00_extraqHNX\x11\x00\x00\x00extra_select_maskqINX\x13\
x00\x00\x00_extra_select_cacheqJNX\x0c\x00\x00\x00extra_tablesqK)X\x0e\x00\x00\x00extra_order_byqL)X\x10\x00\x00\x00deferred_loadingqMcbuiltins\nfrozenset\nqN]qO\x85qPRqQ\x88\x86qRX\x13\x00\x00\x00_filtered_relationsqS}qTX\r\x00\x00\x00explain_queryqU\x89X\x0e\x00\x00\x00explain_formatqVNX\x0f\x00\x00\x00explain_optionsqW}qXX\n\x00\x00\x00base_tableqYh\x0fubX\r\x00\x00\x00_result_cacheqZ]q[X\x0e\x00\x00\x00_sticky_filterq\\\x89X\n\x00\x00\x00_for_writeq]\x89X\x19\x00\x00\x00_prefetch_related_lookupsq^)X\x0e\x00\x00\x00_prefetch_doneq_\x89X\x16\x00\x00\x00_known_related_objectsq`}qaX\x0f\x00\x00\x00_iterable_classqbcdjango.db.models.query\nModelIterable\nqcX\x07\x00\x00\x00_fieldsqdNX\x0f\x00\x00\x00_django_versionqeX\x06\x00\x00\x002.2.11qfub.'

The whole Broker object is unpicklable, though.

@Solanar

Solanar commented Mar 21, 2020

> python 3.7.0 windows, no problems with this code […] The whole Broker object is unpickable

It depends on your Q_CLUSTER settings; that looks like you're using the Django ORM as a broker and not Redis.

@faulander

Windows 10,
Django ORM,
Django Q 1.24,
Python 3.8.0

  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\hf\Projects\P4S\src\.venv\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "C:\Users\hf\Projects\P4S\src\.venv\lib\site-packages\django\core\management\__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\hf\Projects\P4S\src\.venv\lib\site-packages\django\core\management\base.py", line 328, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\hf\Projects\P4S\src\.venv\lib\site-packages\django\core\management\base.py", line 369, in execute
    output = self.handle(*args, **options)
  File "C:\Users\hf\Projects\P4S\src\.venv\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "C:\Users\hf\Projects\P4S\src\.venv\lib\site-packages\django_q\cluster.py", line 66, in start
    self.sentinel.start()
  File "C:\Python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python38\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "C:\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Python38\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

@faulander

Workaround: develop in WSL; it works well there: https://code.visualstudio.com/docs/remote/wsl

09:34:39 [Q] INFO Q Cluster dakota-kilo-zebra-monkey starting.
09:34:39 [Q] INFO Process-1:1 ready for work at 663
09:34:39 [Q] INFO Process-1:2 ready for work at 664
09:34:39 [Q] INFO Process-1:3 ready for work at 665
09:34:39 [Q] INFO Process-1:4 ready for work at 666
09:34:39 [Q] INFO Process-1:5 monitoring at 667
09:34:39 [Q] INFO Process-1 guarding cluster dakota-kilo-zebra-monkey
09:34:39 [Q] INFO Q Cluster dakota-kilo-zebra-monkey running.
09:34:39 [Q] INFO Process-1:6 pushing tasks at 668

@warabak

warabak commented Jun 28, 2020

  • Python 3.8.0
  • Mac OS (10.15.5)
  • django-q==1.2.4
  • boto3==1.14.12
  • redis==3.5.3

Redis

Traceback (most recent call last):
  File "./manage.py", line 21, in <module>
    main()
  File "./manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/base.py", line 328, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/base.py", line 369, in execute
    output = self.handle(*args, **options)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django_q/management/commands/qcluster.py", line 22, in handle
    q.start()
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django_q/cluster.py", line 66, in start
    self.sentinel.start()
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
    return Popen(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object

SQS

Traceback (most recent call last):
  File "./manage.py", line 21, in <module>
    main()
  File "./manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/base.py", line 328, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django/core/management/base.py", line 369, in execute
    output = self.handle(*args, **options)
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django_q/management/commands/qcluster.py", line 22, in handle
    q.start()
  File "/Users/warabak/.pyenv/versions/servitor-3.8.0/lib/python3.8/site-packages/django_q/cluster.py", line 66, in start
    self.sentinel.start()
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
    return Popen(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/Users/warabak/.pyenv/versions/3.8.0/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'boto3.resources.factory.sqs.ServiceResource'>: attribute lookup sqs.ServiceResource on boto3.resources.factory failed

@Koed00
Owner

Koed00 commented Jul 22, 2020 via email

@NotSoSmartDev

I removed my previous comment because of a new exception: when I switched to the FileBasedCache backend I got the same error as in #313, so it's not really a fix.

@rotoyang

rotoyang commented Aug 4, 2020

macOS 10.15.6,
Django ORM,
Django-Q 1.1.0,
Python 3.7.6

In [1]: from django_q.brokers import get_broker                                                                                 
In [2]: import pickle                                                                                                           
In [3]: pickle.dumps(get_broker().connection)                                                                                   
Out[3]: b'\x80\x03cdjango.db.models.query\nQuerySet\nq\x00)\x81q\x01}q\x02(X\x05\x00\x00\x00modelq\x03cdjango_q.models\nOrmQ\nq\x04X\x03\x00\x00\x00_dbq\x05X\x07\x00\x00\x00defaultq\x06X\x06\x00\x00\x00_hintsq\x07}q\x08X\x05\x00\x00\x00queryq\tcdjango.db.models.sql.query\nQuery\nq\n)\x81q\x0b}q\x0c(h\x03h\x04X\x0e\x00\x00\x00alias_refcountq\r}q\x0eX\r\x00\x00\x00django_q_ormqq\x0fK\x00sX\t\x00\x00\x00alias_mapq\x10ccollections\nOrderedDict\nq\x11)Rq\x12h\x0fcdjango.db.models.sql.datastructures\nBaseTable\nq\x13)\x81q\x14}q\x15(X\n\x00\x00\x00table_nameq\x16h\x0fX\x0b\x00\x00\x00table_aliasq\x17h\x0fubsX\x10\x00\x00\x00external_aliasesq\x18cbuiltins\nset\nq\x19]q\x1a\x85q\x1bRq\x1cX\t\x00\x00\x00table_mapq\x1d}q\x1eh\x0f]q\x1fh\x0fasX\x0c\x00\x00\x00default_colsq \x88X\x10\x00\x00\x00default_orderingq!\x88X\x11\x00\x00\x00standard_orderingq"\x88X\x0c\x00\x00\x00used_aliasesq#h\x19]q$\x85q%Rq&X\x10\x00\x00\x00filter_is_stickyq\'\x89X\x08\x00\x00\x00subqueryq(\x89X\x06\x00\x00\x00selectq))X\x05\x00\x00\x00whereq*cdjango.db.models.sql.where\nWhereNode\nq+)\x81q,}q-(X\x08\x00\x00\x00childrenq.]q/X\t\x00\x00\x00connectorq0X\x03\x00\x00\x00ANDq1X\x07\x00\x00\x00negatedq2\x89X\x12\x00\x00\x00contains_aggregateq3\x89ubX\x0b\x00\x00\x00where_classq4h+X\x08\x00\x00\x00group_byq5NX\x08\x00\x00\x00order_byq6)X\x08\x00\x00\x00low_markq7K\x00X\t\x00\x00\x00high_markq8NX\x08\x00\x00\x00distinctq9\x89X\x0f\x00\x00\x00distinct_fieldsq:)X\x11\x00\x00\x00select_for_updateq;\x89X\x18\x00\x00\x00select_for_update_nowaitq<\x89X\x1d\x00\x00\x00select_for_update_skip_lockedq=\x89X\x14\x00\x00\x00select_for_update_ofq>)X\x0e\x00\x00\x00select_relatedq?\x89X\t\x00\x00\x00max_depthq@K\x05X\r\x00\x00\x00values_selectqA)X\x0c\x00\x00\x00_annotationsqBNX\x16\x00\x00\x00annotation_select_maskqCNX\x18\x00\x00\x00_annotation_select_cacheqDNX\n\x00\x00\x00combinatorqENX\x0e\x00\x00\x00combinator_allqF\x89X\x10\x00\x00\x00combined_queriesqG)X\x06\x00\x00\x00_extraqHNX\x11\x00\x00\x00extra_select_maskq
INX\x13\x00\x00\x00_extra_select_cacheqJNX\x0c\x00\x00\x00extra_tablesqK)X\x0e\x00\x00\x00extra_order_byqL)X\x10\x00\x00\x00deferred_loadingqMcbuiltins\nfrozenset\nqN]qO\x85qPRqQ\x88\x86qRX\x13\x00\x00\x00_filtered_relationsqS}qTX\r\x00\x00\x00explain_queryqU\x89X\x0e\x00\x00\x00explain_formatqVNX\x0f\x00\x00\x00explain_optionsqW}qXX\n\x00\x00\x00base_tableqYh\x0fubX\r\x00\x00\x00_result_cacheqZ]q[X\x0e\x00\x00\x00_sticky_filterq\\\x89X\n\x00\x00\x00_for_writeq]\x89X\x19\x00\x00\x00_prefetch_related_lookupsq^)X\x0e\x00\x00\x00_prefetch_doneq_\x89X\x16\x00\x00\x00_known_related_objectsq`}qaX\x0f\x00\x00\x00_iterable_classqbcdjango.db.models.query\nModelIterable\nqcX\x07\x00\x00\x00_fieldsqdNX\x0f\x00\x00\x00_django_versionqeX\x05\x00\x00\x002.2.9qfub.'

@hjmorales

> Also happening to MacOs users in #389
> Seems related to some changes in Python 3.7 & 3.8
> We don't see it happening in Linux though and apparently also not in Pyton 3.6

Python 3.6 doesn't work either

Windows 10
Django ORM
django-q 1.3.2
django 3
python 3.6.6

Great project BTW, thanks!

Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "D:\Users\hj\project\smsapp\venv\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "D:\Users\hj\project\smsapp\venv\lib\site-packages\django\core\management\__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\Users\hj\project\smsapp\venv\lib\site-packages\django\core\management\base.py", line 330, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\Users\hj\project\smsapp\venv\lib\site-packages\django\core\management\base.py", line 371, in execute
    output = self.handle(*args, **options)
  File "D:\Users\hj\project\smsapp\venv\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "D:\Users\hj\project\smsapp\venv\lib\site-packages\django_q\cluster.py", line 67, in start
    self.sentinel.start()
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects

(venv) D:\Users\hj\project\smsapp>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 99, in spawn_main
    new_handle = reduction.steal_handle(parent_pid, pipe_handle)
  File "C:\Users\hj\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 82, in steal_handle
    _winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] El parámetro no es correcto  (translation: "The parameter is incorrect")

@MushroomMaula

I have the same issue; however, it's occurring for me when running python manage.py qcluster.

Windows 10
Django 3.0.8
django-q 1.3.2
Python 3.8.5
Traceback (most recent call last):
  File "C:/Users/Max/PycharmProjects/summer-code-jam-2020/concerned-coyotes/earlyinternet/manage.py", line 21, in <module>
    main()
  File "C:/Users/Max/PycharmProjects/summer-code-jam-2020/concerned-coyotes/earlyinternet/manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\base.py", line 328, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\base.py", line 369, in execute
    output = self.handle(*args, **options)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django_q\cluster.py", line 67, in start
    self.sentinel.start()
  File "C:\Python\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Python\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Python\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object
Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\base.py", line 328, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django\core\management\base.py", line 369, in execute
    output = self.handle(*args, **options)
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "C:\Users\Max\PycharmProjects\summer-code-jam-2020\venv\lib\site-packages\django_q\cluster.py", line 67, in start
    self.sentinel.start()
  File "C:\Python\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Python\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Python\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Python\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

@c-goosen

I have come across this issue as well. It is definitely macOS-related and, from what I can tell, down to how data is pickled (serialized) between processes. Linux (Ubuntu 20.04) has no issues; it's related to how macOS and Windows handle multiprocessing. I might take a stab at this one if no one else has any ideas.

Also, caching is not the issue, but it also causes an error.

@c-goosen

Also qinfo and qmonitor are not affected

@roharvey

roharvey commented Sep 4, 2020

This is a duplicate of #389, though on Windows spawn has been the default start method for longer than just Python 3.8.
See an explanation at #389 (comment)

@JosiahDub

I've dealt with this issue off and on. I found that if I use the dummy cache I do not get the error; switching to django-redis as the cache gives me the error.

python 3.8.1
django 3.1
django-q 1.3.3
MacOS 10.15.6
redis 3.5.3
django-redis 4.12.1
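That pattern (dummy cache fine, real client broken) fits the lock explanation: a real cache or broker client typically holds a connection pool guarded by a thread lock. A quick generic diagnostic (a sketch; FakeClient is a stand-in, not django-redis's client) can show exactly which attribute of an object blocks pickling:

```python
# Generic diagnostic sketch: report which attributes of an object fail to
# pickle. FakeClient is a hypothetical stand-in for a cache/broker client
# that holds a thread lock, as many real connection pools do.
import pickle
import threading

def unpicklable_attrs(obj):
    """Return (name, type, error) for each instance attribute that won't pickle."""
    bad = []
    for name, value in vars(obj).items():
        try:
            pickle.dumps(value)
        except Exception as exc:
            bad.append((name, type(value).__name__, str(exc)))
    return bad

class FakeClient:
    def __init__(self):
        self.host = "localhost"          # pickles fine
        self.lock = threading.Lock()     # this is the culprit

print(unpicklable_attrs(FakeClient()))
```

Running the same helper against a real broker or cache client instance should point at the lock-bearing attribute that trips up spawn.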

@NotSoSmartDev

> The problem lies with the shared Value object the workers keep around as timers. […]
> Unless someone wants to help out, it will be a while before I can test this on one of the afflicted OS's.

Hello @Koed00, I found an interesting tool for Linux users that lets you test Django-Q on a Mac; maybe you will be interested: https://github.com/fastai/fastmac

@roharvey

I found a simple way to test the behavior on Linux. I think a better description of the issue is "django-q doesn't work in spawn mode"; sometimes it produces this error, sometimes others. If you change the Linux multiprocessing start method to spawn, you can see the issues.

On a Linux system, I edited my manage.py file (so it runs early in the startup procedure) to add the following:

import multiprocessing
multiprocessing.set_start_method("spawn")
print(f"Changed start method to {multiprocessing.get_start_method()}")

This appears to work fine with most Django operations, such as the shell and runserver:

~/venvs/current$ manage.py shell
Changed start method to spawn
Python 3.8.5 (default, Jul 20 2020, 19:50:14) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from django.contrib.auth.models import Permission
>>> Permission.objects.count()
304
>>>  
now exiting InteractiveConsole...
~/venvs/current$
~/venvs/current$ manage.py runserver
Changed start method to spawn
Changed start method to spawn
2020-09-16 09:35:22,869 INFO [django.utils.autoreload.run_with_reloader:612] Watching for file changes with StatReloader
...
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
^C~/venvs/current$

But it hangs on qcluster (it doesn't load properly and I have to kill -9 the process):

~/venvs/current$ manage.py qcluster
Changed start method to spawn
...
09:36:57 [Q] INFO Q Cluster-15637 starting.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/srv/iris/venvs/b19ece8e6c0512d2e3ca5f5f325c7b6bdef9ae2c/lib/python3.8/site-packages/django_q/cluster.py", line 24, in <module>
    import django_q.tasks
  File "/srv/iris/venvs/b19ece8e6c0512d2e3ca5f5f325c7b6bdef9ae2c/lib/python3.8/site-packages/django_q/tasks.py", line 14, in <module>
    from django_q.models import Schedule, Task
  File "/srv/iris/venvs/b19ece8e6c0512d2e3ca5f5f325c7b6bdef9ae2c/lib/python3.8/site-packages/django_q/models.py", line 15, in <module>
    class Task(models.Model):
  File "/srv/iris/venvs/b19ece8e6c0512d2e3ca5f5f325c7b6bdef9ae2c/lib/python3.8/site-packages/django/db/models/base.py", line 108, in __new__
    app_config = apps.get_containing_app_config(module)
  File "/srv/iris/venvs/b19ece8e6c0512d2e3ca5f5f325c7b6bdef9ae2c/lib/python3.8/site-packages/django/apps/registry.py", line 252, in get_containing_app_config
    self.check_apps_ready()
  File "/srv/iris/venvs/b19ece8e6c0512d2e3ca5f5f325c7b6bdef9ae2c/lib/python3.8/site-packages/django/apps/registry.py", line 135, in check_apps_ready
    raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
Sentry is attempting to send 0 pending error messages
Waiting up to 2 seconds
Press Ctrl-C to quit
^C^C^C^Z
[1]+  Stopped                 manage.py qcluster
~/venvs/current$ kill -9 %1
[1]+  Stopped                 manage.py qcluster
~/venvs/current$ 
[1]+  Killed                  manage.py qcluster
~/venvs/current$

Without changing the start method, qcluster starts up normally.

@mohmyo
Copy link

mohmyo commented Sep 18, 2020

Tried on Windows 10 with python 3.6 and 3.7, no luck with both.

@mohmyo
Copy link

mohmyo commented Sep 19, 2020

As @faulander pointed out, using WSL on Windows 10 can solve the problem; it was quicker to set up than I thought.

@jeroenbrouwer
Copy link

jeroenbrouwer commented Sep 26, 2020

I didn't see issue #389 before, so I'm also commenting in this thread as it may help others. That issue provided a workaround for me. I ended up adding the following snippet to my manage.py and it works :-)

import platform
if platform.system() != "Linux":
    from multiprocessing import set_start_method
    set_start_method("fork")

My setup is macOS 10.15.6, django-q 1.3.3 with redis, python 3.8.5
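Note that this snippet would fail on Windows, where "fork" is not available (as @mohmyo confirms below, it doesn't work there). A more defensive variant — my own sketch, not the commenter's code — guards for macOS specifically:

```python
import multiprocessing
import platform

# "fork" exists only on POSIX; calling set_start_method("fork") on
# Windows raises ValueError, so guard for macOS explicitly rather
# than for "anything that isn't Linux".
if platform.system() == "Darwin":
    multiprocessing.set_start_method("fork", force=True)
```

The force=True flag avoids a RuntimeError if the start method has already been set elsewhere in the startup sequence.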

@mohmyo
Copy link

mohmyo commented Sep 26, 2020

@jeroenbrouwer This doesn't work for windows

@ihuk
Copy link

ihuk commented Oct 14, 2020

I looked into this some more. Here are my findings.

Depending on the platform, multiprocessing supports three ways to start a process: spawn, fork, and forkserver. In Python 3.8 the default start method on macOS was changed to spawn.

The spawn start method enforces a few extra restrictions which do not apply when using the fork start method, most notably that all arguments to Process.__init__ must be picklable.

Looking at cluster.py, the first problem is that the broker passed into Process in Cluster.start is not picklable:

self.sentinel = Process(
        target=Sentinel,
        args=(
            self.stop_event,
            self.start_event,
            self.cluster_id,
            self.broker,
            self.timeout,
        ),
    )

I tried a quick hack and replaced self.broker with None. With that it no longer failed with the above error; instead it failed with django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.

I added an apps.check_apps_ready() check before the django-q imports and that helped, but not a whole lot. It then failed in Cluster.spawn_monitor and Cluster.spawn_pusher, which both pass the broker to Cluster.spawn_process, which in turn passes it to Process.__init__.

I set the broker to None in both cases and was able to make progress. The next error was AttributeError: 'Queue' object has no attribute 'size' in django_q.queues.Queue. It appears that django_q.queues.Queue is not pickled correctly. I was able to fix that by overriding the __getstate__ and __setstate__ methods:

def __getstate__(self):
    return super(Queue, self).__getstate__() + (self.size, )

def __setstate__(self, state):
    super(Queue, self).__setstate__(state[:-1])
    self.size = state[-1]

With that fixed, it actually worked! I then fixed pickling for the Broker class and was able to undo all the changes where I had previously set the broker to None. It appears to be working, at least with the Redis broker I tested with.

I was not able to do away with apps.check_apps_ready() check.

You can see my changes in this commit. I can send a pull request if anyone is interested.

@mohmyo
Copy link

mohmyo commented Oct 15, 2020

Great work @ihuk! Can you try it on Windows? Note that Windows supports only one way to start a process, which is spawn.

@ihuk
Copy link

ihuk commented Oct 15, 2020

Hey @mohmyo, I don't have Windows on any of my machines right now, sorry. I see that some brokers are probably not pickled correctly. I'll try to fix those as well.

@ihuk
Copy link

ihuk commented Oct 16, 2020

I updated AWS and Mongo brokers. You can see the changes in commit 507636716856203d096438653419b1fe0054dab5.

Let me know if you think I should send a pull request.

@Koed00
Copy link
Owner

Koed00 commented Oct 18, 2020

@ihuk if you think this will fix the issue, please make a pr. Thanks for the work.

@ihuk
Copy link

ihuk commented Oct 19, 2020

Good luck with this!

@mohmyo Colleague from work ran a quick test on Windows. It appears to be working.

@mohmyo
Copy link

mohmyo commented Oct 19, 2020

@ihuk This is a good sign!, I will try it too and confirm.
Thank you!

@ihuk
Copy link

ihuk commented Oct 19, 2020

@ihuk if you think this will fix the issue, please make a pr. Thanks for the work.

Pull request sent.

Koed00 added a commit that referenced this issue Oct 20, 2020
@Koed00
Copy link
Owner

Koed00 commented Oct 20, 2020

Merged to master - does anyone want to give the master branch a try to see if it fixes the issue for them?

@mohmyo
Copy link

mohmyo commented Oct 21, 2020

Tested on an old Windows 10 version (about 3 years old) with Python 3.7.4:

I just created a new project, set the broker to the ORM against the default db, and ran python.exe .\manage.py qcluster.

There are no errors at all: the TypeError: can't pickle _thread.lock objects that shows up with version 1.3.3 is gone, and the workers started normally. I didn't push any tasks to the workers, though.

Great job! @ihuk

I will test it on my personal machine later which is up to date, I will try python 3.8 too.

@biozz
Copy link

biozz commented Oct 21, 2020

Tested the master branch on macOS 10.15.7, Python 3.8.2 (pyenv, framework build), with the Redis broker and Django 3.1.

I went through the quickstart section: qcluster starts and works, and there were no errors. I also tried an example task from the docs and it works too; the results are displayed in the Django admin.

Thank you @ihuk! I was not able to properly get started with django-q for a month because of that issue.

@ihuk
Copy link

ihuk commented Oct 21, 2020

I took a quick look at some other issues and I believe this will also fix #389, #390 and #408.

@Koed00
Copy link
Owner

Koed00 commented Oct 21, 2020

Awesome job - I still have some PR's to review but I'll try to get this out soon.

@ihuk
Copy link

ihuk commented Oct 21, 2020

No worries. Let me know if you run into any problems.

jannikbook added a commit to eckynde/sprinklercontrol that referenced this issue Oct 24, 2020
This fixes Koed00/django-q#424.
As soon as the PR (#482) is incorporated in a new release, the official release can be used again
@zhunus1
Copy link

zhunus1 commented Oct 26, 2020

Hello! I have just installed Django 3 and Python 3.8.3 on Windows 10.
I get the same error when running python manage.py qcluster:

Traceback (most recent call last):
  File "manage.py", line 23, in <module>
    main()
  File "manage.py", line 19, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\zhunu.virtualenvs\pushes-fZx5IDOs\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "C:\Users\zhunu.virtualenvs\pushes-fZx5IDOs\lib\site-packages\django\core\management\__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\zhunu.virtualenvs\pushes-fZx5IDOs\lib\site-packages\django\core\management\base.py", line 330, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\zhunu.virtualenvs\pushes-fZx5IDOs\lib\site-packages\django\core\management\base.py", line 371, in execute
    output = self.handle(*args, **options)
  File "C:\Users\zhunu.virtualenvs\pushes-fZx5IDOs\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "C:\Users\zhunu.virtualenvs\pushes-fZx5IDOs\lib\site-packages\django_q\cluster.py", line 67, in start
    self.sentinel.start()
  File "c:\python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "c:\python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "c:\python38\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "c:\python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "c:\python38\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

@mohmyo
Copy link

mohmyo commented Oct 26, 2020


Tested on the same old Windows 10 version, but with Python 3.8.6 this time, following the same steps as before: no errors.

@zhunus1
Copy link

zhunus1 commented Oct 26, 2020


YEAH! I have solved this problem simply by installing this module directly from the GitHub source: sudo pip install git+https://github.com/myuser/myproject.git#egg=myproject
And everything worked for me.
PS: Windows 10 + Django 3 + Python 3.8.
Best regards, Zhunus

@Koed00
Copy link
Owner

Koed00 commented Oct 26, 2020

Fixed by #482 apparently

@Koed00 Koed00 closed this as completed Oct 26, 2020
jannikbook added a commit to eckynde/sprinklercontrol that referenced this issue Oct 27, 2020
This fixes Koed00/django-q#424.
As soon as the PR (#482) is incorporated in a new release, the official release can be used again
@sandeepsinghsengar
Copy link


Hi @mohmyo @ihuk

I am getting the same error, TypeError: can't pickle _thread.lock objects. The configuration is as follows:

Windows 10,
tensorflow-gpu: 2.2.0, 2.3.1,
Python: 3.7.3, 3.8.3

I tried different versions of TensorFlow and Python, but I get the same error. Could you please tell me your steps to resolve it? I've been stuck on this same error for the last 3 days. Your help will be highly appreciated.

@mohmyo
Copy link

mohmyo commented Nov 15, 2020

@sandeepsinghsengar are you using the latest version, 1.3.4? Make sure you have that one installed, as it has the fix we were talking about earlier in this thread.

@sandeepsinghsengar
Copy link

sandeepsinghsengar commented Nov 15, 2020

Version 1.3.4 of what? I am not using Django. I am simply trying to run an implementation on MRI data using the Anaconda prompt.

@mohmyo
Copy link

mohmyo commented Nov 15, 2020

This issue is for django-q package, nothing else

@sandeepsinghsengar
Copy link

Okay. But I am not using Django, yet I am still getting the same error.

@mohmyo
Copy link

mohmyo commented Nov 15, 2020

This error is so generic that it can be raised by any other package. See which class caused it in your error traceback and search for that.
You are in the wrong place.
