Gerhard Niklasch on Wed, 4 Apr 2001 20:05:35 +0200 (MEST)
Re: parallelization
In response to:
> Message-ID: <21C10221747AD3118E9D00E0293851AA04307342@exmail20.usma.army.mil>
> From: "Mills, D. Dr. MATH" <ad3943@exmail.usma.army.mil>
> To: pari-users@list.cr.yp.to
> Date: Wed, 4 Apr 2001 13:19:41 -0400
>
> I'm using PARI on a supercomputer, and am curious to know any programming
> "tricks" or "tips" to aid me in efficiently parallelizing my programs.

What kind of platform and operating system (and compiler)?

You can always subdivide the computational tasks and run several
independent processes, each on its own CPU. I have used this approach
many times over the years, on a variety of problems.

Whether you can efficiently exploit any hardware vector capabilities
of your CPUs will depend on how good your compiler and optimizer are
on PARI's inner loops. Finding places where suitable hints or pragmas
could be added is likely to be a non-trivial task... Compiling for
profiling and making measurements may help you find out which parts of
the code your programs actually exercise.

libpari currently does not lend itself to multithreading within one
and the same process, since the central data structures are accessed
through global variables which exist only once per process -- not
without very devious hacks involving dlopen() and private symbol
tables, which I'd hesitate to delve into. (And good multithread-hot
malloc() implementations are a story in and of themselves; Hoard is an
interesting one, see www.hoard.org. In the context of PARI, though,
this would be very difficult to exploit at present, unless the problem
is structured in such a way that the main thread executes in libpari
while the other threads do work that does not depend on libpari in any
way, with proper synchronization around any shared data structure
accesses.)

Hope this helps,
Gerhard
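The process-level subdivision described above can be sketched in plain
POSIX shell: carve the input range into one chunk per CPU and launch an
independent worker process on each. This is only an illustrative sketch,
not from the original post; the chunk sizes and the placeholder worker
(an `echo`) stand in for a real PARI computation such as a `gp -q`
pipeline, as hinted in the comments.

```shell
#!/bin/sh
# Sketch (hypothetical): split the range 1..TOTAL into NCPU chunks and
# run one independent worker process per chunk. Because each worker is
# a separate OS process with its own output, no shared state or locking
# is needed -- this is exactly why the approach works with libpari's
# per-process global data.
NCPU=4
TOTAL=100000
CHUNK=$((TOTAL / NCPU))

i=0
while [ "$i" -lt "$NCPU" ]; do
    LO=$((i * CHUNK + 1))
    HI=$(((i + 1) * CHUNK))
    # In a real run, the worker would be something like:
    #   echo "for(n=$LO,$HI, your_computation(n))" | gp -q > out.$i &
    # Here a placeholder echo marks which range each worker owns.
    echo "worker $i handles $LO..$HI" &
    i=$((i + 1))
done
wait   # block until every background worker has exited
```

The results of the individual workers (e.g. files `out.0` .. `out.3`)
can then be merged in a final sequential step.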
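The "compiling for profiling" suggestion might look roughly like the
following. This is a hedged sketch, not a recipe from the post: the
exact Configure invocation and how cleanly `-pg` propagates through
PARI's build depend on your PARI version and toolchain, and
`myprog.gp` is a placeholder for your own script.

```shell
# Hypothetical build-and-profile workflow with gprof:
#   -pg instruments the binary; running it once writes gmon.out,
#   which gprof then attributes to functions, exposing the inner
#   loops your programs actually spend time in.
./Configure                          # configure the PARI source tree
make CFLAGS="-O2 -pg" gp             # build gp with profiling hooks
echo 'read("myprog.gp")' | ./gp -q   # one representative run -> gmon.out
gprof ./gp gmon.out | head -40       # flat profile: hottest functions first
```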