
introspection is slow and causes a significant memory leak #13057

Closed
sagetrac-bober mannequin opened this issue May 29, 2012 · 33 comments

Comments

@sagetrac-bober
Mannequin

sagetrac-bober mannequin commented May 29, 2012

Introspection used to be more or less immediate, and now something like

sage: gcd?<ENTER>

in the sage command line takes a few seconds.

"used to be" means that this slowdown happened sometime between 4.7.1 and 5.0, and probably between 4.8 and 5.0.

I don't know right now if we have the same regression in the notebook.

CC: @hivert

Component: documentation

Keywords: regression introspection

Author: John Palmieri

Reviewer: Keshav Kini

Merged: sage-5.1.beta5

Issue created by migration from https://trac.sagemath.org/ticket/13057

@sagetrac-bober sagetrac-bober mannequin added this to the sage-5.1 milestone May 29, 2012
@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 29, 2012

comment:1

As another point of reference: If I run sage -ipython and then try str?, I get the docstring immediately. If I then do

from sage.all import *

the same thing now takes a second or two.

@simon-king-jena
Member

comment:2

I did

sage: from sage.misc.sageinspect import sage_getdoc
sage: %prun L = sage_getdoc(gcd)

which, I thought, should reveal what is happening under the hood.

However, I don't really understand the outcome:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.286    0.286    1.269    1.269 intersphinx.py:74(read_inventory_v2)
    20751    0.272    0.000    0.576    0.000 intersphinx.py:90(split_lines)
    28708    0.215    0.000    0.327    0.000 posixpath.py:60(join)
        2    0.179    0.089    0.179    0.090 {cPickle.dump}
       73    0.123    0.002    0.123    0.002 {method 'search' of '_sre.SRE_Pattern' objects}
        1    0.108    0.108    0.536    0.536 intersphinx.py:53(read_inventory_v1)
    36194    0.099    0.000    0.099    0.000 {method 'setdefault' of 'dict' objects}
81003/79947    0.097    0.000    0.100    0.000 {len}
    20758    0.095    0.000    0.246    0.000 {method 'decode' of 'str' objects}
    26193    0.093    0.000    0.093    0.000 {_codecs.utf_8_decode}
 1461/328    0.089    0.000    0.356    0.001 sre_parse.py:379(_parse)
     7914    0.089    0.000    0.252    0.000 codecs.py:503(readline)
    20238    0.086    0.000    0.125    0.000 sre_parse.py:182(__next)
2972/1056    0.084    0.000    0.093    0.000 sre_parse.py:140(getwidth)
    28718    0.077    0.000    0.077    0.000 {method 'split' of 'unicode' objects}
    20756    0.076    0.000    0.151    0.000 utf_8.py:15(decode)
     5444    0.066    0.000    0.115    0.000 codecs.py:424(read)
       29    0.064    0.002    0.118    0.004 sre_compile.py:301(_optimize_unicode)
 2344/286    0.060    0.000    0.259    0.001 sre_compile.py:32(_compile)
    28665    0.059    0.000    0.059    0.000 {method 'startswith' of 'unicode' objects}
    18416    0.057    0.000    0.057    0.000 {method 'read' of 'file' objects}
    26193    0.053    0.000    0.053    0.000 {method 'endswith' of 'unicode' objects}
    28717    0.052    0.000    0.052    0.000 {method 'endswith' of 'str' objects}
    41439    0.049    0.000    0.049    0.000 {method 'append' of 'list' objects}
    28826    0.048    0.000    0.048    0.000 {method 'rstrip' of 'unicode' objects}
    17769    0.045    0.000    0.154    0.000 sre_parse.py:201(get)
    21097    0.045    0.000    0.045    0.000 {method 'find' of 'str' objects}
      688    0.034    0.000    0.174    0.000 sre_compile.py:207(_optimize_charset)
     7914    0.034    0.000    0.286    0.000 codecs.py:612(next)
    10021    0.030    0.000    0.046    0.000 sre_parse.py:130(__getitem__)
        1    0.027    0.027    0.115    0.115 pickle.py:845(load)
      266    0.021    0.000    0.025    0.000 sre_compile.py:258(_mk_bitmap)
    13462    0.021    0.000    0.021    0.000 {isinstance}
     2766    0.017    0.000    0.030    0.000 pickle.py:929(load_binint1)
     5499    0.015    0.000    0.015    0.000 {method 'splitlines' of 'unicode' objects}
  918/288    0.015    0.000    0.361    0.001 sre_parse.py:301(_parse_sub)
     7445    0.014    0.000    0.028    0.000 sre_parse.py:195(match)
       23    0.013    0.001    0.013    0.001 {built-in method decompress}
     4694    0.012    0.000    0.018    0.000 sre_parse.py:138(append)
     1379    0.011    0.000    0.019    0.000 pickle.py:1173(load_long_binput)
     8237    0.010    0.000    0.010    0.000 {ord}
       66    0.009    0.000    0.015    0.000 optparse.py:1007(add_option)
     3832    0.009    0.000    0.013    0.000 sre_parse.py:126(__len__)
     3232    0.009    0.000    0.013    0.000 token.py:43(__hash__)
        2    0.009    0.004    0.009    0.004 {posix.popen}
      688    0.008    0.000    0.186    0.000 sre_compile.py:178(_compile_charset)
      934    0.008    0.000    0.801    0.001 re.py:228(_compile)
      286    0.008    0.000    0.150    0.001 sre_compile.py:361(_compile_info)
        5    0.008    0.002    0.028    0.006 style.py:17(__new__)
  1485/27    0.008    0.000    0.011    0.000 nodes.py:189(_fast_traverse)
     5253    0.008    0.000    0.008    0.000 {min}
      269    0.007    0.000    0.007    0.000 {method 'update' of 'dict' objects}
     4114    0.007    0.000    0.011    0.000 {method 'get' of 'dict' objects}
       13    0.006    0.000    0.345    0.027 __init__.py:10(<module>)
        1    0.006    0.006    0.011    0.011 nodes.py:20(<module>)
        1    0.005    0.005    0.494    0.494 application.py:12(<module>)
        5    0.005    0.001    0.327    0.065 application.py:351(add_node)
    19/18    0.005    0.000    0.289    0.016 {__import__}
        1    0.005    0.005    0.227    0.227 states.py:101(<module>)
      588    0.005    0.000    0.005    0.000 re.py:206(escape)
     2388    0.004    0.000    0.004    0.000 sre_parse.py:90(__init__)
     3206    0.004    0.000    0.004    0.000 sre_compile.py:24(_identityfunction)
      688    0.004    0.000    0.048    0.000 statemachine.py:690(make_transition)
      732    0.004    0.000    0.008    0.000 sre_compile.py:354(_simple)
      286    0.004    0.000    0.789    0.003 sre_compile.py:495(compile)
     3232    0.004    0.000    0.004    0.000 {hash}
      248    0.004    0.000    0.004    0.000 {built-in method __new__ of type object at 0x7fa2a2015f60}
      507    0.004    0.000    0.006    0.000 sre_parse.py:225(_class_escape)
  288/286    0.003    0.000    0.371    0.001 sre_parse.py:663(parse)
     1919    0.003    0.000    0.003    0.000 {marshal.loads}
      614    0.003    0.000    0.005    0.000 sre_parse.py:257(_escape)
     1841    0.003    0.000    0.003    0.000 {repr}
      120    0.003    0.000    0.052    0.000 statemachine.py:723(make_transitions)
      917    0.003    0.000    0.792    0.001 re.py:188(compile)
        3    0.003    0.001    0.059    0.020 __init__.py:11(<module>)
      350    0.003    0.000    0.004    0.000 token.py:15(split)
        3    0.003    0.001    0.097    0.032 html.py:10(<module>)
     2211    0.003    0.000    0.003    0.000 {method 'extend' of 'list' objects}
     1782    0.003    0.000    0.003    0.000 {setattr}
     1065    0.002    0.000    0.003    0.000 {hasattr}
      861    0.002    0.000    0.002    0.000 pickle.py:1006(load_tuple2)
      286    0.002    0.000    0.411    0.001 sre_compile.py:480(_code)
        1    0.002    0.002    0.019    0.019 agile.py:10(<module>)
        1    0.002    0.002    0.238    0.238 __init__.py:68(<module>)
     1040    0.002    0.000    0.002    0.000 {method 'join' of 'str' objects}
      345    0.002    0.000    0.003    0.000 pickle.py:933(load_binint2)
        1    0.002    0.002    0.310    0.310 latex.py:13(<module>)
      572    0.002    0.000    0.003    0.000 sre_compile.py:474(isstring)
     1202    0.002    0.000    0.002    0.000 {getattr}
      468    0.002    0.000    0.003    0.000 pickle.py:1014(load_empty_list)
        4    0.002    0.000    0.002    0.000 {posix.remove}
      594    0.002    0.000    0.002    0.000 {range}
      233    0.002    0.000    0.004    0.000 pickle.py:1185(load_appends)
      203    0.002    0.000    0.003    0.000 other.py:986(_shortened)
        1    0.002    0.002    0.168    0.168 environment.py:10(<module>)
...
        1    0.000    0.000    0.000    0.000 references.py:7(<module>)
        1    0.000    0.000    0.000    0.000 universal.py:13(<module>)
        1    0.000    0.000    3.304    3.304 sagedoc.py:333(format)
        8    0.000    0.000    0.000    0.000 {_bisect.insort}
       70    0.000    0.000    0.000    0.000 __init__.py:41(__init__)
...

So, it seems much time is spent in the function "format". Has there been a change to that function recently? Or am I misinterpreting the figures (I am not sure if "cumtime" and "percall" include the time for calling sub-functions).
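As a point of reference for reading the profile table above: in Python's cProfile output, tottime counts only a function's own body, while cumtime (and its second percall column) also includes time spent in sub-calls. A minimal generic sketch, not Sage-specific:

```python
import cProfile
import io
import pstats

def inner():
    # Most of the work happens here, in a sub-call.
    return sum(i * i for i in range(100000))

def outer():
    # outer() itself does almost nothing, but its cumtime will
    # nevertheless include the time spent inside inner().
    return inner()

profiler = cProfile.Profile()
profiler.enable()
outer()
profiler.disable()

stats = pstats.Stats(profiler, stream=io.StringIO())
# stats.stats maps (file, line, funcname) -> (cc, nc, tottime, cumtime, callers)
by_name = {key[2]: value for key, value in stats.stats.items()}
cc, nc, tottime, cumtime, callers = by_name['outer']
print(cumtime >= tottime)  # cumtime is tottime plus time in sub-calls
```

So the 3.3 s cumtime attributed to sagedoc.py's format above does include everything it calls, even though its own tottime is negligible.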

@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 29, 2012

comment:3

So, it seems much time is spent in the function "format". Has there been a change to that function recently? Or am I misinterpreting the figures (I am not sure if "cumtime" and "percall" include the time for calling sub-functions).

I'm not entirely sure what the output means either. This reminded me that there is some other information that I should have added, though. gcd? seems to take a good deal longer than sage.misc.sageinspect.sage_getdoc(gcd), which already takes way too long. (On my machine it is about 2 seconds for the question mark, and about 0.5 seconds for the direct function call.)

I had thought that most of the time taken by sage.misc.sageinspect.sage_getdoc might be taken up by sage.misc.sagedoc.detex, but that doesn't show up in the above at all, and I don't know what explains the extra 1.5 seconds.

@kiwifb
Member

kiwifb commented May 29, 2012

comment:4

Could you produce a similar report for sage-4.8? If the slowdown happened in sage-5.0, the most likely candidate would be Python itself.

@ppurka
Member

ppurka commented May 29, 2012

comment:5

Here are the various timings:

~» su -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
Password: 
~» Installations/sage-4.8.good/sage -q
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)
CPU times: user 0.54 s, sys: 0.06 s, total: 0.60 s
Wall time: 3.09 s
sage: 
Exiting Sage (CPU time 0m0.66s, Wall time 0m14.59s).
~» Installations/sage-5.0.beta2/sage -q
Loading Sage library. Current Mercurial branch is: main-backup
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)
CPU times: user 0.50 s, sys: 0.05 s, total: 0.56 s
Wall time: 1.61 s
sage: 
Exiting Sage (CPU time 0m0.61s, Wall time 0m11.66s).
~» Installations/sage-5.0.rc0.good/sage -q
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)
CPU times: user 1.55 s, sys: 0.09 s, total: 1.64 s
Wall time: 2.93 s
sage: 
Exiting Sage (CPU time 0m1.67s, Wall time 0m10.45s).
~» Installations/sage-5.1beta0/sage -q
Loading Sage library. Current Mercurial branch is: trac
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)
CPU times: user 1.06 s, sys: 0.08 s, total: 1.14 s
Wall time: 2.03 s
sage: 
Exiting Sage (CPU time 0m1.19s, Wall time 0m10.42s).

@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 29, 2012

comment:6

Replying to @ppurka:

Here are the various timings:

~» su -c 'sync; echo 3 > /proc/sys/vm/drop_caches'

Dropping the caches first is not a good way of timing this. The problem is not with disk access, but something else, so a better test is to get the docstring twice, and use the second timing.

I now think that the change should be in either 5.0beta7 or 5.0beta8...
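The "get the docstring twice, and use the second timing" advice can be sketched generically (a plain-Python stand-in; sage_getdoc itself is not used here, and time_warm is a hypothetical helper):

```python
import time

def time_warm(fn, *args):
    """Time a call after one warm-up run, so disk caches and
    lazily-initialized state don't dominate the measurement."""
    fn(*args)                       # warm-up run: populates caches
    start = time.perf_counter()
    fn(*args)                       # measured run, warm state
    return time.perf_counter() - start

# Any callable works as a stand-in for sage_getdoc(gcd).
elapsed = time_warm(len, "example docstring")
print(f"warm timing: {elapsed:.6f}s")
```

If the second (warm) run is still slow, the cost is in repeated computation rather than disk access, which is exactly the distinction being made here.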

@ppurka
Member

ppurka commented May 29, 2012

comment:7

OK, here they are again. I haven't dropped caches this time. The Sage libraries are already loaded, so sage -q gives the Sage prompt quickly.

~» Installations/sage-5.1beta0/sage -q
Loading Sage library. Current Mercurial branch is: trac
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)                   
CPU times: user 1.02 s, sys: 0.05 s, total: 1.07 s
Wall time: 1.11 s
sage: 
Exiting Sage (CPU time 0m1.10s, Wall time 0m6.96s).
~» Installations/sage-5.0.rc0.good/sage -q
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)                   
CPU times: user 1.02 s, sys: 0.05 s, total: 1.07 s
Wall time: 1.09 s
sage: 
Exiting Sage (CPU time 0m1.10s, Wall time 0m5.61s).
~» Installations/sage-5.0.beta2/sage -q
Loading Sage library. Current Mercurial branch is: main-backup
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)                   
CPU times: user 0.44 s, sys: 0.03 s, total: 0.48 s
Wall time: 0.50 s
sage: 
Exiting Sage (CPU time 0m0.52s, Wall time 0m5.43s).
~» Installations/sage-4.8.good/sage -q
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)                   
CPU times: user 0.48 s, sys: 0.04 s, total: 0.52 s
Wall time: 0.54 s
sage: 
Exiting Sage (CPU time 0m0.56s, Wall time 0m4.47s).

@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 29, 2012

comment:8

So it appears that the regression happened somewhere in 5.0beta8. #9128 seems like it touched a lot of stuff related to this, so it is a possible candidate for the cause of the regression.

(Adding hivert to cc, since he was the author on that ticket.)

@kiwifb
Member

kiwifb commented May 29, 2012

comment:9

The sphinx ticket. That should be testable starting from beta7 and adding this single ticket.

@ppurka
Member

ppurka commented May 29, 2012

comment:10

There doesn't seem to be much of a difference from sage-4.7 (it is a different machine, though).

~> ./sage-4.7/sage -q
sage: from sage.misc.sageinspect import sage_getd
sage_getdef  sage_getdoc  
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)
CPU times: user 0.39 s, sys: 0.09 s, total: 0.48 s
Wall time: 0.53 s

This test on sage-4.7 is done on a

model name	: Intel(R) Xeon(R) CPU           X5460  @ 3.16GHz

a virtual machine with 4 cpu cores and with 20G of memory.

The earlier tests were done on my laptop

model name	: Intel(R) Core(TM) i5 CPU       M 460  @ 2.53GHz

with 2 cores (4 with HT) and with 4G of memory.

@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 29, 2012

comment:11

Replying to @kiwifb:

The sphinx ticket. That should be testable starting from beta7 and adding this single ticket.

Yes, I just did that. The problem definitely comes from #9128.

@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 29, 2012

comment:12

Replying to @ppurka:

There doesn't seem to be much of a difference from sage-4.7 (it is a different machine, though).

~> ./sage-4.7/sage -q
sage: from sage.misc.sageinspect import sage_getd
sage_getdef  sage_getdoc  
sage: from sage.misc.sageinspect import sage_getdoc
sage: %time L = sage_getdoc(gcd)
CPU times: user 0.39 s, sys: 0.09 s, total: 0.48 s
Wall time: 0.53 s

This test on sage-4.7 is done on a

model name	: Intel(R) Xeon(R) CPU           X5460  @ 3.16GHz

a virtual machine with 4 cpu cores and with 20G of memory.

The earlier tests were done on my laptop

model name	: Intel(R) Core(TM) i5 CPU       M 460  @ 2.53GHz

with 2 cores (4 with HT) and with 4G of memory.

Try running the sage_getdoc function a few times in a row with the same input on 4.7 and 5.0 (or just using the question mark) and you should see the difference. You really need to do it a few times in a row, though. The first time may be slow because it is reading from disk (a "warm cache" from starting up sage is not enough to prevent this) but it should not remain slow.

@ppurka
Member

ppurka commented May 29, 2012

comment:13

You are right. There is a slowdown indeed.

~» Installations/sage-5.1beta0/sage -q
Loading Sage library. Current Mercurial branch is: trac
sage: from sage.misc.sageinspect import sage_getdoc
sage: timeit('L = sage_getdoc(gcd)')
5 loops, best of 3: 624 ms per loop
sage: 
Exiting Sage (CPU time 0m13.41s, Wall time 0m30.68s).
~» Installations/sage-5.0.rc0.good/sage -q
sage: from sage.misc.sageinspect import sage_getdoc
sage: timeit('L = sage_getdoc(gcd)')               
5 loops, best of 3: 600 ms per loop
sage: 
Exiting Sage (CPU time 0m13.04s, Wall time 0m18.14s).

~» Installations/sage-5.0.beta2/sage -q
Loading Sage library. Current Mercurial branch is: main-backup
sage: from sage.misc.sageinspect import sage_getdoc
sage: timeit('L = sage_getdoc(gcd)')               
5 loops, best of 3: 18.2 ms per loop
sage: 
Exiting Sage (CPU time 0m0.86s, Wall time 0m5.01s).

~> ./sage-4.7/sage -q
sage: from sage.misc.sageinspect import sage_getdoc
sage: timeit('L = sage_getdoc(gcd)')
5 loops, best of 3: 17.8 ms per loop

@ppurka
Member

ppurka commented May 29, 2012

comment:14

Just tested on sage-5.0beta7. The patch responsible for the slowdown is the second one from that ticket:
https://github.com/sagemath/sage-prod/files/10649422/trac_9128-sphinx_links_all-fh.patch.gz

@jhpalmieri
Member

comment:15

Perhaps it's the intersphinx invocation? Maybe we should disable that during introspection, and maybe the same for the dangling links. Something like this?

diff --git a/doc/common/conf.py b/doc/common/conf.py
--- a/doc/common/conf.py
+++ b/doc/common/conf.py
@@ -608,14 +608,19 @@ def setup(app):
     app.connect('autodoc-process-docstring', process_inherited)
     app.connect('autodoc-skip-member', skip_member)
 
-    app.add_config_value('intersphinx_mapping', {}, True)
-    app.add_config_value('intersphinx_cache_limit', 5, False)
-    # We do *not* fully initialize intersphinx since we call it by hand
-    # in find_sage_dangling_links.
-    #   app.connect('missing-reference', missing_reference)
-    app.connect('missing-reference', find_sage_dangling_links)
-    import sphinx.ext.intersphinx
-    app.connect('builder-inited', set_intersphinx_mappings)
-    app.connect('builder-inited', sphinx.ext.intersphinx.load_mappings)
+    # When building the standard docs, app.srcdir is set to SAGE_DOC +
+    # 'LANGUAGE/DOCNAME', but when doing introspection, app.srcdir is
+    # set to a temporary directory.  We don't want to use intersphinx,
+    # etc., when doing introspection.
+    if app.srcdir.startswith(SAGE_DOC):
+        app.add_config_value('intersphinx_mapping', {}, True)
+        app.add_config_value('intersphinx_cache_limit', 5, False)
+        # We do *not* fully initialize intersphinx since we call it by hand
+        # in find_sage_dangling_links.
+        #   app.connect('missing-reference', missing_reference)
+        app.connect('missing-reference', find_sage_dangling_links)
+        import sphinx.ext.intersphinx
+        app.connect('builder-inited', set_intersphinx_mappings)
+        app.connect('builder-inited', sphinx.ext.intersphinx.load_mappings)
     app.connect('builder-inited', nitpick_patch_config)

@jhpalmieri
Member

comment:16

For me, without this change:

sage: from sage.misc.sageinspect import sage_getdoc
sage: timeit('L = sage_getdoc(gcd)')
5 loops, best of 3: 888 ms per loop

With the change:

sage: from sage.misc.sageinspect import sage_getdoc
sage: timeit('L = sage_getdoc(gcd)')
5 loops, best of 3: 26 ms per loop

@jhpalmieri
Member

Attachment: trac_13057-no-intersphinx.patch.gz

@jhpalmieri
Member

comment:17

Here is essentially the above patch (although I've now also included the last line in the "if" block). Please test it from the command line and in the notebook, and you should probably also build the regular documentation and make sure it still looks okay.

When I was testing this, I added a line to try to verify that app.srcdir is as I'm claiming in the patch (this applies on top of the attached patch):

diff --git a/doc/common/conf.py b/doc/common/conf.py
--- a/doc/common/conf.py
+++ b/doc/common/conf.py
@@ -608,6 +608,8 @@ def setup(app):
     app.connect('autodoc-process-docstring', process_inherited)
     app.connect('autodoc-skip-member', skip_member)
 
+    print "************* %s **************" % app.srcdir
+
     # When building the standard docs, app.srcdir is set to SAGE_DOC +
     # 'LANGUAGE/DOCNAME', but when doing introspection, app.srcdir is
     # set to a temporary directory.  We don't want to use intersphinx,

Florent: is this a good solution (i.e., not using intersphinx, etc., when doing introspection)?

@jhpalmieri
Member

Author: John Palmieri

@kini
Contributor

kini commented May 31, 2012

comment:18

As I just reported on sage-devel, #9128 apparently also introduced a pretty large memory leak - every docstring lookup with "?" in the console leaks 56 MB of memory. This patch fixes the leak.
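A leak of this kind can be spotted by checking whether traced memory keeps growing across repeated lookups. A hedged sketch using tracemalloc — leaky_lookup is a hypothetical stand-in for the docstring lookup, retaining about 1 MB per call instead of the reported 56 MB:

```python
import tracemalloc

def leaky_lookup(cache=[]):
    # Hypothetical stand-in for a docstring lookup that retains
    # state between calls (here via a mutable default argument).
    cache.append(bytearray(1_000_000))  # ~1 MB retained per call
    return len(cache)

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(5):
    leaky_lookup()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = after - before
# Memory that stays allocated after the calls return is the leak.
print(f"retained across 5 calls: {growth} bytes")
```

A healthy lookup would show roughly flat traced memory after the first call; steady growth proportional to the call count is the signature of a leak.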

@kini changed the title from "introspection is slow" to "introspection is slow and causes a significant memory leak" May 31, 2012
@sagetrac-bober
Mannequin Author

sagetrac-bober mannequin commented May 31, 2012

comment:19

This patch seems to take care of my complaints nicely. I don't know what is actually going on with it, though, so I hope that someone who does know will give it a review.

The 56 MB memory leak is worrying, though, and we should figure out why that was happening. Does someone know?

@kini
Contributor

kini commented Jun 3, 2012

comment:20

Should this ticket be priority critical? Users use introspection a lot, in my experience. This is not just some obscure memory leak.

@jpflori
Contributor

jpflori commented Jun 6, 2012

comment:21

Replying to @kini:

Should this ticket be priority critical? Users use introspection a lot, in my experience. This is not just some obscure memory leak.

If this also solves the fact that building the doc now uses a HUGE amount of memory, I'd vote for this ticket being critical.

@jpflori
Contributor

jpflori commented Jun 6, 2012

comment:22

Nevermind, the doc building problem is already present in 4.8 so is completely unrelated, I mixed two different threads on sage-devel.
Sorry for the noise.

@jhpalmieri
Member

comment:23

This is related to docbuilding using a lot of memory, but it doesn't solve that problem. It suggests that the problem comes from using intersphinx, and this might help someone track down the exact issue. Is there a ticket open for this issue?

@jhpalmieri
Member

comment:24

I agree with Keshav: this ticket should have a higher priority. Any possible reviewers?

@ppurka
Member

ppurka commented Jun 14, 2012

comment:25

I can confirm that this patch works, and fixes all the issues raised here. But I know nothing about sphinx, so I am not sure about setting it to positive review.

@hivert
Contributor

hivert commented Jun 15, 2012

comment:26

Replying to @ppurka:

I can confirm that this patch works, and fixes all the issues raised here. But I know nothing about sphinx, so I am not sure about setting it to positive review.

I want to have a close look at this one (to understand why there is such a huge slowdown/memleak). It seems that the intersphinx database is recreated at each run (which is very bad wrt speed; caching the result should be doable) and not garbage collected (which is even worse). But I can't manage to find the time to do it. It is clearly critical or blocker. I'm sorry, I didn't find the time to even apply the patch. Does it deactivate intersphinx when using ?? in the notebook? If so, I think we should look for a better solution, because not calling find_sage_dangling_links breaks hundreds of links (see #9128).

Sorry that my machine is very fast and that I didn't notice the problem when writing #9128.

Cheers,

Florent
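The caching Florent suggests could look roughly like this sketch — load_inventory is a hypothetical stand-in for the intersphinx inventory reader, memoized so repeated lookups reuse the parsed result instead of rebuilding it on every run:

```python
import functools

@functools.lru_cache(maxsize=None)
def load_inventory(path):
    # Hypothetical stand-in for parsing an intersphinx inventory file.
    # With the cache, this body runs once per distinct path; later
    # calls return the already-parsed result.
    return {"source": path, "objects": {}}

first = load_inventory("objects.inv")
second = load_inventory("objects.inv")
print(first is second)  # the second call hit the cache
```

Because the cache holds a reference to the parsed object, this also bounds memory use at one copy per inventory file, instead of one copy per lookup.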

@jhpalmieri
Member

comment:27

Before applying the patch, executing sage.algebras.group_algebra_new.GroupAlgebra.one_basis? (or the same but with ??) in the notebook displays something which looks like a link, but is broken, at least for me running Chrome or Firefox on OS X 10.7. The link is of the form

localhost:8000/FULL_PATH_TO_REFERENCE_MANUAL/sage/categories/algebras_with_basis.html#sage.categories.algebras_with_basis.AlgebrasWithBasis.ParentMethods.one_basis

It should probably be of the form

localhost:8000/doc/static/reference/sage/categories/...

That is, the desired behavior is currently broken.

After applying the patch, there is no link at all.

So I would propose that we use this fix for now, since it speeds things up, prevents a memory leak, and doesn't disable anything which currently works. Then later when you have time, we can work on tracking down the memory leak, reinstating intersphinx in the notebook, and fixing the links so that they actually work. (Just fixing the link is, I think, currently unacceptable given the slowdown and the memory leak issues.)

@kini
Contributor

kini commented Jun 15, 2012

Reviewer: Keshav Kini

@kini
Contributor

kini commented Jun 15, 2012

comment:28

The leak is ridiculously bad. I'm going to give this positive review - whatever else this patch might do, it does fix that, at least.

@jhpalmieri
Member

comment:29

See #13127 and #13128 for follow-ups: fixing the memory leak, and fixing the links in introspection combined with intersphinx.

@jdemeyer
Contributor

Merged: sage-5.1.beta5
