views:

361

answers:

6

At work we have an application to play back 2K (2048×1556px) OpenEXR film sequences. It works well, except with sequences over 3GB (quite common): it then has to unload old frames from memory, even though all the machines have 8-16GB of memory (which is addressable via the Linux bigmem/PAE kernels).

The frames have to be cached in memory to play back in realtime. The OS is a several-year-old 32-bit Fedora distro (upgrading to 64-bit is not possible for the foreseeable future), and the address-space limit is 3GB per process.

Basically: is it possible to cache more than 3GB of data in memory, somehow? My initial idea was to spread the data between multiple processes, but I've no idea whether that's feasible..

+2  A: 

How about creating a RAM drive and loading the file into that ... assuming the RAM drive supports the BIGMEM stuff for you.

You could use multiple processes: each process loads a view of the file as a shared memory segment, and the player process then maps the segments in turn as needed.

Rob Walker
+3  A: 

One possibility may be to use mmap. You would map/unmap different parts of your data into the same virtual memory region. Only one window would be mapped at a time, but as long as there is enough physical memory, the data should stay resident in the page cache.

KeithB
+1  A: 

I assume you can modify the application. If so, the easiest thing would be to start the application several times (once for each 3GB chunk of video), have each instance hold its chunk in memory, and use another program to synchronize them so each takes control of the framebuffer (or other video output) in turn.

The synchronization is going to be a little messy, perhaps, but it can be simplified if each app has its own framebuffer and the sync program points the video controller to the correct framebuffer in between frames when switching to the next app.

Adam Davis
+1  A: 

My, what an interesting problem :)

(EDIT: Oh, I just read Rob's ram drive post...I got all excited by the problem...but have a bit more to suggest, so I won't delete)

Would it be possible to...

  1. setup a multi-gigabyte ram disk, and then
  2. modify the program to do all its reading from the "disk"?

I'd guess the RAM disk part is where all the problems would be, since the size of the RAM disk would be OS- and file-system-dependent. You might have to create multiple RAM disks and have your code jump between them. Or maybe you could set up a RAID-0 stripe set over multiple RAM disks. Or, if there are still OS limitations and you can afford to drop a couple grand ($4k?), set up a hardware RAID-0 stripe set with some of those new blazing-fast solid state drives. Or...
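For reference, a tmpfs-based RAM disk is a one-liner on Linux; the mount point and size below are illustrative (and this needs root). On a bigmem kernel, tmpfs pages can come from highmem, which is exactly the memory the 3GB-per-process limit keeps the app from touching directly:

```shell
# Create a 6GB tmpfs RAM disk (size and mount point are made up here):
mkdir -p /mnt/framecache
mount -t tmpfs -o size=6g tmpfs /mnt/framecache

# Copy a sequence in, then point the player at the mount:
cp /path/to/sequence/*.exr /mnt/framecache/
```

Reads from the mount then come straight out of RAM, while each mapped file window still fits comfortably inside the process's 3GB address space.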

Fun, fun, fun.

Be sure to follow up!

Stu Thompson
A: 

How about creating a RAM drive and loading the file into that

I think this would be the best solution. It would require little modification to the playback tool (which is good, as there's a weird setup with its source code, so only a few people have access to it).

can afford to drop a couple grand ($4k?), set up a hardware RAID-0 stripe set with some of those new blazing-fast solid state drives.

There is a review machine with an absurd fiber-channel RAID array that can play 2K files directly from the array easily. The issue is with the artist workstations, so it wouldn't be one $4000 RAID array, it'd be hundreds..

We are looking into upgrading the artist workstations to a 64-bit OS, and we are also evaluating another playback tool; either would solve this problem. I'm not quite sure what the eventual solution will be, but thanks for your responses!

dbr
This is not an answer. Either comment or edit your question.
Geoffrey Chetwood
A: 

@dbr said:

There is a review machine with an absurd fiber-channel RAID array that can play 2K files directly from the array easily. The issue is with the artist workstations, so it wouldn't be one $4000 RAID array, it'd be hundreds..

Well, if you can accept a limit of ~30GB, then maybe a single 36GB SSD drive would be enough? Those go for ~US$1k each, I think, and the data rates might be sufficient. That may well be cheaper than a pure RAM approach. There are smaller sizes available, too. If ~60GB is enough, you could probably get away with a JBOD array of two for double the cost and skip the RAID controller. Be sure to look only at the higher-end SSD options--the low end is filled with glorified memory sticks. :P

Stu Thompson