Saturday 2 November 2013

The e-zed runtime ...

I thought I'd have a quick look at hacking on parallella today. I considered trying a multi-core algorithm again, but in the end I didn't want to code up all that boilerplate and so I worked on the elf-loader code again.

The last time I looked at it I had tentatively started hacking up an api to support multi-core loading, but it was pretty pants and so I left it there.

This morning I started afresh, and it worked out much simpler than I thought.

This is the API I am currently working towards:

  ez_workgroup_t *ez_open(int row, int col, int nrows, int ncols);
  void ez_close(ez_workgroup_t *wg);

  int ez_load(ez_workgroup_t *wg, const char *path, int row, int col, int nrows, int ncols);
  void *ez_addr(ez_workgroup_t *wg, int row, int col, const char *name);

  void ez_start(ez_workgroup_t *wg);
  void ez_reset(ez_workgroup_t *wg);

(I called it ez_ to avoid namespace clashes and because I use z a lot in software, not to make it sound like 'easy'; although it turned out that way - remember, it's e-zed, not e-zee!)

At present ez_workgroup_t is a subclass of e_dev_t, so one can simply be cast to the other, although I'm not sure how long Adapteva will keep the api the same such that this works.

The idea here is that a workgroup represents a rectangular group of cores which will cooperate on a job. It doesn't quite fit the opencl workgroup topology of homogeneous multi-core, nor particularly the way I think the parallella sdk is heading; but it can be fine-tuned.

In the end it was more work thinking up the api than implementing it - I already have much of it working. Because I was too lazy to write a proper multi-core test, I just ran the single-core test code, one instance at a time on different cores. It worked.
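For illustration, this is roughly how I expect the host side to look. A minimal sketch only, assuming a 2x2 workgroup, a kernel called e_test.elf and a symbol named result - all stand-in names, not working code.

  ez_workgroup_t *wg = ez_open(0, 0, 2, 2);

  // load the same kernel onto all four cores
  ez_load(wg, "e_test.elf", 0, 0, 2, 2);

  // look up an on-core symbol by name
  int *result = ez_addr(wg, 0, 0, "result");

  ez_start(wg);
  // ... wait for the cores to finish, read back the result ...
  ez_close(wg);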

Issues remaining

  1. The loader relocates each program's code once and then splats the same image to each target core. This means they all share off-core addresses and there is no way to create thread-local / private storage.

    I will think about this; probably the easiest way is to implement support for thread-local storage sections. Even without compiler support this can (sort of) go straight into the loader, though it would require some more work since the loader would have to handle each target core separately. Otherwise it would need some runtime setup.

  2. There's still no easy way to create linkage between on-core code.

    e-lib has some support for relocating a known address to a different core - but that only works automatically if you're running the same code on that core. In a non-homogeneous workgroup scenario you have to find this information externally.

    Oh hang on, actually I can do this ... nice.

    Code for each core can define weak references to code on other cores. Once all the code has been loaded into the workgroup the loader could just resolve these references. The code would still need to use the e-lib functions (or equivalent) to turn a core-relative address into a grid-relative one, but that is not particularly onerous compared with having to manually extract symbol values from linker files at link or run-time ... (a rough sketch of this resolution pass follows the list below).

  3. But if I do that ... what happens with .shared blocks? Are they really shared globally amongst all grids in the workgroup, or are they local to each grid? Hmm ...

    Doable technically, once I decide on an acceptable solution. Probably a new section name is needed.

  4. The newly published workgroup hardware support stuff should map nicely to this api, e.g. blatting on the code and starting the workgroup.

  5. It would be nice to have a way to manage multiple programmes at once in some sort of efficient manner, e.g. being able to change the set of programmes on-core without having to relocate them again and re-set the shared data space. Because LDS is so small, real applications will need to swap code in and out repeatedly.

  6. I'm not aiming for, nor interested in, creating an environment for loading general purpose code which wasn't written for the parallella. This is for loading small kernels which will have all code on-core and only use external memory for communications.
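To make item 2 a bit more concrete, here is a very rough sketch of what that off-core resolution pass might look like. ez_program_t, the weak[] table and ez_symbol_value() are invented names purely for illustration - not the actual loader internals.

/* sketch only: resolve undefined weak symbols in each program
 * against definitions in the other programs of the workgroup */
static void ez_resolve_weak(ez_workgroup_t *wg) {
        for (int i = 0; i < wg->nprograms; i++) {
                ez_program_t *p = wg->programs[i];

                for (int s = 0; s < p->nweak; s++) {
                        for (int j = 0; j < wg->nprograms; j++) {
                                void *v;

                                if (j == i)
                                        continue;

                                v = ez_symbol_value(wg->programs[j], p->weak[s].name);
                                if (v) {
                                        /* core-relative address; runtime code
                                           still uses e-lib to make it
                                           grid-relative */
                                        p->weak[s].value = v;
                                        break;
                                }
                        }
                }
        }
}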

I guess once I get the global weak-reference stuff sorted out I can finally tackle some multi-core algorithms and go back to the face recognition stuff. But today is too nice to spend inside, and so it waits.

Update: I had another short hacking session this morning and came up with a simple solution that handles most problems without too much effort.

First, I changed the load api call to 'bind' instead, and added a separate load call.

  int ez_bind(ez_workgroup_t *wg, const char *path, int row, int col, int nrows, int ncols);
  int ez_load(ez_workgroup_t *wg);

Internally things are a bit different:

  • Bind opens the elf file and lays out the sections - but does no reloc processing nor does it write to the target cores.
  • A separate link is performed which processes all the reloc sections, and resolves symbols across programs. Again this is all implemented off-core.
  • The load stage just copies the core binaries to the cores. This is literally just a few memcpy()s per core, so it should be possible to switch between programs relatively quickly.

At the moment the linking/reloc is done automatically when needed - e.g. if you call ez_addr - but it might make sense to make it explicit.
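Putting it together, the host-side flow now looks something like this. Again just a sketch, with stand-in program and symbol names and a 2x2 workgroup split between two programs.

  ez_workgroup_t *wg = ez_open(0, 0, 2, 2);

  // lay out both programs - no relocs yet, nothing written on-core
  ez_bind(wg, "e_prog_a.elf", 0, 0, 1, 1);
  ez_bind(wg, "e_prog_b.elf", 1, 0, 1, 1);

  // ez_addr forces the link/reloc pass, which also resolves
  // symbols across the two programs
  void *shared_blk = ez_addr(wg, 0, 0, "shared");

  // copy the finished binaries to the cores and go
  ez_load(wg);
  ez_start(wg);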

When working on the external-elf linking I realised I didn't need to implement anything special for globally-shared resources: they can just be weak references, as with any other run-time linked resource.

e.g. program a, which defines a work-group shared block and references a buffer local to another core:

#include <e_lib.h>

/* assumed definition; in practice this would live in a header
 * shared by both programs */
struct interface { int a; int b; };

struct interface shared __attribute__((section(".bss.shared")));
extern int bufferb[] __attribute__((weak));     // resolved by the loader

int main(void) {
        // bufferb is core-relative; map it to the global address
        // of the core at (1,0) in the workgroup
        int *p = e_get_global_address(1, 0, bufferb);

        p[0] = 10;
        shared.a = shared.b;

        return 0;
}

And program b, which references the shared block from program a and defines a core-local buffer (in bank 1):

struct interface { int a; int b; };   // same assumed definition as above

extern struct interface shared __attribute__((weak));  // resolved to program a's block
int bufferb[4] __attribute__((section(".bss.bank1")));

int main(void) {
        // can use shared or bufferb directly
        return 0;
}

One of the other issues I was worried about was allocating thread-private storage, but that can just be done manually by reserving specific slots in shared resources in a more explicit manner.
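For example, something along these lines - a minimal sketch that reuses the assumed struct interface from above and hard-codes a 2x2 workgroup:

#include <e_lib.h>

#define WG_CORES 4      /* assumed 2x2 workgroup */

/* per-core "private" slots reserved by hand in the shared block */
struct interface {
        int a;
        int b;
        int private[WG_CORES][16];
};

struct interface shared __attribute__((section(".bss.shared")));

int main(void) {
        unsigned row, col;

        // find this core's coordinates within the workgroup
        e_coords_from_coreid(e_get_coreid(), &row, &col);

        // index our own slot; no other core touches it
        int *mine = shared.private[row * e_group_config.group_cols + col];

        mine[0] = 42;
        return 0;
}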
