API clarification #37
Forgot to mention: I also posted recently on the mailing list at https://groups.google.com/forum/#!topic/julia-users/td8QBHHjwkk, but that was before I discovered the existence of ODE.jl.
Offhand I would say that it should be possible, although it might not give the best performance at the moment. But it would be a great example and performance test! Regarding your points:
I have created an example for using a custom type with ode45, which can be found in this gist.
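For concreteness, a minimal sketch of what such a custom type can look like (this is not the gist itself; the field names and the exact set of operations ODE.jl's solvers require are assumptions):

```julia
using LinearAlgebra   # for norm

# Wrapper holding the two complex fields; an explicit RK solver only needs
# basic vector-space arithmetic and a norm for error control.
struct TwoFields
    psiX::Matrix{ComplexF64}
    psiC::Matrix{ComplexF64}
end

Base.:+(a::TwoFields, b::TwoFields) = TwoFields(a.psiX .+ b.psiX, a.psiC .+ b.psiC)
Base.:-(a::TwoFields, b::TwoFields) = TwoFields(a.psiX .- b.psiX, a.psiC .- b.psiC)
Base.:*(c::Number, a::TwoFields)    = TwoFields(c .* a.psiX, c .* a.psiC)
Base.:*(a::TwoFields, c::Number)    = c * a
Base.:/(a::TwoFields, c::Number)    = (1 / c) * a
Base.zero(a::TwoFields)             = TwoFields(zero(a.psiX), zero(a.psiC))
LinearAlgebra.norm(a::TwoFields)    = max(norm(a.psiX), norm(a.psiC))

# The RHS then follows ODE.jl's f(t, y) convention and returns another TwoFields:
# f(t, y::TwoFields) = TwoFields(dpsiX, dpsiC)
```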
Looking at the question on SO, I guess your third point concerns the split-step technique? In that case I think it is not necessary to do the FFT in the RHS function; you would rather do the FFT between the solution steps. Let's say the RHS function of step 1 is called
The FFT and phase-factor multiplication should probably be done in-place. Note that this will not give you an (overall) adaptive step-size scheme. One aspect of the API which would be nice in this context, but is not implemented, is the keyword
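Roughly, the split-step loop described above could look like this sketch (the names `f_nonlinear` and `phase`, and the calling convention `ode45(f, y0, tspan)` with `f(t, y)`, are assumptions; a real implementation would use in-place FFT plans):

```julia
using ODE, FFTW

function split_step(f_nonlinear, psi0, ts, phase)
    psi = copy(psi0)
    for k in 1:length(ts)-1
        # nonlinear substep: integrate adaptively inside the interval [ts[k], ts[k+1]]
        _, ys = ode45(f_nonlinear, psi, [ts[k], ts[k+1]])
        psi = ys[end]
        # linear substep between solver calls: FFT, multiply phase factor, transform back
        psihat = fft(psi)
        psihat .*= phase
        psi = ifft(psihat)
    end
    return psi
end
```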
Well actually, so far I am just porting the adaptive Runge-Kutta which I already had working in Fortran over to Julia. My SO question concerned other (faster) ways of doing the integration. The answer I got there about operator splitting might not actually be valid for this problem, since on the RHS I have the time-dependent term F(x,y) exp(-i \omega_p t) in the second equation.
In that case it should basically Just Work ...
Ok, done. So now we have a fully working Julia implementation of my pattern formation problem and a Fortran 90 one as well. Here is the IJulia notebook: and here the gist:

Caveats:
Very nice! It would be even cooler if it worked with stock ODE.jl ;-) But I think we will get there. Regarding your caveats:
which will create temporary copies (
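(For reference, the point about temporary copies in current Julia: plain array expressions allocate intermediate arrays, whereas a fused, dotted assignment writes into an existing one. A minimal illustration:)

```julia
A   = rand(ComplexF64, 256, 256)
B   = rand(ComplexF64, 256, 256)
out = similar(A)

out = 2.0 * A + B       # allocates a temporary for 2.0*A and a fresh array for the sum
out .= 2.0 .* A .+ B    # fused broadcast: writes into the existing `out`, no temporaries
```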
May I ask why you switched to
The decision is twofold. On the one hand, being a higher-order RK method, it should take larger steps, resulting in fewer function evaluations (which in my case are really costly). Whether it will be faster or not in practice remains to be seen once I get a working version. Secondly, dense output is easier to implement in
I see. It will be interesting to see a comparison of the performance. (I guess you know that dense output for
How large is the difference between the F90 and the Julia code, actually?
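(As an aside, the simplest form of dense output is the generic cubic Hermite interpolant built from the values and derivatives at the accepted steps; higher-order methods ship their own higher-degree interpolants, which is where most of the implementation work lies. A sketch:)

```julia
# Continuous extension between an accepted step (t0, y0, f0) and (t1, y1, f1),
# where f0 and f1 are the RHS evaluations at the endpoints.
function hermite_interp(t, t0, t1, y0, y1, f0, f1)
    h = t1 - t0
    θ = (t - t0) / h
    return (1 - θ) .* y0 .+ θ .* y1 .+
           θ * (θ - 1) .* ((1 - 2θ) .* (y1 .- y0) .+ (θ - 1) * h .* f0 .+ θ * h .* f1)
end
```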
I have a simple benchmark for one of my use cases (the solution of the two-body Kepler problem):
The Julia values are a few months old actually and I will re-test it with the current head today. I might also do a comparison with
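For context, a two-body Kepler RHS for such a benchmark might look like the following sketch (not the code from the repository linked later in the thread; the constant, units and state layout are assumptions):

```julia
using LinearAlgebra   # for norm

const mu = 398600.4418   # km^3/s^2, Earth's gravitational parameter (assumed)

# State y = [rx, ry, rz, vx, vy, vz], using ODE.jl's f(t, y) convention
function kepler(t, y)
    r = y[1:3]
    v = y[4:6]
    a = (-mu / norm(r)^3) .* r
    return vcat(v, a)
end
```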
@helgee Nice! It's impressive that Julia is within an order of magnitude of gfortran even with `-Ofast`. Do you have the benchmarking code posted somewhere? Would be interesting to do a legibility comparison too ;)
@tlycken Yup, right here: https://github.com/helgee/dop-julia EDIT: Unfortunately I cannot reproduce the results and it is currently much slower.
@helgee I modified your benchmark a little, simply because I couldn't figure out a reliable way to measure many evaluations in Julia separately, so these timings are total times over a 10,000-iteration loop:
So there's still some way to go to beat
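The timing loop meant here is essentially the following sketch (reusing the hypothetical `kepler` RHS from above; initial state and time span are made up):

```julia
using ODE

y0    = [7000.0, 0.0, 0.0, 0.0, 7.5, 0.0]   # km, km/s (made up)
tspan = [0.0, 5400.0]                        # seconds (made up)

t_total = @elapsed for _ in 1:10_000
    ode45(kepler, y0, tspan)
end
println("total: $(t_total) s, per solve: $(1e3 * t_total / 10_000) ms")
```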
I was wondering, would the Sundials interface support the type I defined based on complex matrices?
@berceanu: Unfortunately not. You will need to reshape the matrices into vectors and join them. Moreover I am not sure if you can actually use complex vectors with Sundials ...

@tlycken, @helgee: I have updated my "benchmark" in this gist to include
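The reshaping described above could be sketched like this (hypothetical helpers; whether Sundials then accepts the resulting real vector is a separate question):

```julia
# Pack two n×m complex matrices into one real vector (2 Float64s per complex entry) ...
pack(psiX, psiC) = vcat(vec(reinterpret(Float64, psiX)), vec(reinterpret(Float64, psiC)))

# ... and recover them again inside the RHS.
function unpack(u::Vector{Float64}, n::Int, m::Int)
    half = 2 * n * m
    psiX = reshape(reinterpret(ComplexF64, u[1:half]), n, m)
    psiC = reshape(reinterpret(ComplexF64, u[half+1:end]), n, m)
    return psiX, psiC
end
```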
@acroy Interesting... We could probably get some more information from these benchmarks simply by re-ordering some things and piggy-backing on the more verbose output of
to
That should give us both memory allocation and gc info "for free", with minimal changes to the output format. If gc turns out to be a big factor, maybe we should consider calling it explicitly between runs, and somehow not count that time? After all, solving the same problem 10 times in a row isn't really a credible scenario...
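Something along these lines, presumably (a sketch; `kepler`, `y0` and `tspan` as in the hypothetical benchmark sketches above):

```julia
for i in 1:10
    GC.gc()                          # collect between runs so earlier garbage isn't charged to this run
    @time ode45(kepler, y0, tspan)   # @time reports elapsed time, allocations and GC time
end
```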
@tlycken: I have updated the results using
@acroy: nice! I don't think this looks so suspicious, though. Sure, the higher-order method takes about half as many steps, but it will also do many more function evaluations per step. I can't say that this is off without spending some time profiling - time which I unfortunately need to spend on finishing up my master's thesis at the moment...
@tlycken: I think you are right. Most of the time difference comes from the different number of function calls (I just counted the calls in
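(Counting RHS evaluations can be done with a simple wrapper; a sketch, again using the hypothetical `kepler` example from above:)

```julia
const ncalls = Ref(0)

counted_rhs(t, y) = (ncalls[] += 1; kepler(t, y))   # wrap the RHS and count each call

ncalls[] = 0
ode45(counted_rhs, y0, tspan)
println("RHS evaluations: ", ncalls[])
```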
Definitely going OT here, but all OK on my side, thanks a lot for your help!
I would like to solve the 2 coupled time-dependent PDEs from http://scicomp.stackexchange.com/questions/11772/numerical-approach-to-coupled-nonlinear-pdes-with-time-dependence/11782?noredirect=1#comment18452_11782
using ode45. I already have a working Fortran 90 code for this, based on the ode45 presented in Numerical Recipes. The plan is to add this to the examples section once finished.
I find the last test in https://github.com/JuliaLang/ODE.jl/blob/master/test/runtests.jl to be a good starting point, as it has a system of 2 coupled equations. However, my case is more complicated because:
- `v` and `w` (called psiX and psiC in my problem) are themselves (complex) matrices, defined on a finite grid.

The main issue here: is it possible to use the current API to implement this problem? If not, I am willing to contribute to improving the API, but would appreciate some guidance.
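One possible way to fit this into the existing `f(t, y)` API, assuming the solver accepts multidimensional complex arrays (which is exactly the open question here), is to stack the two fields along a third dimension; a sketch with placeholder right-hand sides:

```julia
using ODE

nx, ny = 64, 64                        # grid size (placeholder)
psi0 = zeros(ComplexF64, nx, ny, 2)    # psi0[:,:,1] = psiX, psi0[:,:,2] = psiC

function rhs(t, psi)
    psiX = @view psi[:, :, 1]
    psiC = @view psi[:, :, 2]
    dpsi = similar(psi)
    dpsi[:, :, 1] .= 0                 # replace with the actual equation for psiX
    dpsi[:, :, 2] .= 0                 # replace with the actual equation for psiC
    return dpsi
end

tout, psiout = ode45(rhs, psi0, [0.0, 10.0])
```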