tags:
views: 412
answers: 1

Hello Everyone,

I actually have two questions, but they are related, so here they go as one...

How do I ensure garbage collection of tree nodes that are not currently displayed when using TreeViewer(SWT.VIRTUAL) and ILazyTreeContentProvider? If a node has 5000 children, once they have been displayed by the viewer they are never let go, hence an OutOfMemoryError if your tree has a great number of nodes and leaves and the heap is not big enough. Is there some kind of best practice for avoiding memory leaks caused by a never-closed view holding a tree viewer with huge amounts of data (hundreds of thousands of objects, or even millions)? Perhaps there is some callback interface which allows greater flexibility with viewer/content provider elements?
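One mitigation I have been toying with, sketched below and purely hypothetical (ChildCache is my own helper, not JFace API): hold fetched children only behind SoftReferences, so the GC can reclaim them under memory pressure, and re-fetch from the model on a cache miss inside updateElement().

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical soft-reference cache for lazily fetched tree children.
 * Values can be reclaimed by the GC when heap runs low; callers must be
 * prepared for get() to return null and re-fetch from the model.
 */
class ChildCache<K, V> {
    private final Map<K, SoftReference<V>> cache = new HashMap<>();

    /** Returns the cached value, or null if never cached or already reclaimed. */
    V get(K key) {
        SoftReference<V> ref = cache.get(key);
        return ref == null ? null : ref.get();
    }

    void put(K key, V value) {
        cache.put(key, new SoftReference<>(value));
    }
}
// In ILazyTreeContentProvider.updateElement(parent, index) you would consult
// the cache first, hit the model (and repopulate the cache) only on a miss,
// and then call viewer.replace(parent, index, child) as usual.
```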

Is it possible to combine deferred (DeferredTreeContentManager) AND lazy (ILazyTreeContentProvider) loading for a single TreeViewer(SWT.VIRTUAL)? As far as I understand from the examples and APIs, it is only possible to use one or the other at a given time, but not both in conjunction, e.g. fetch ONLY the visible children of a given node AND fetch them in a separate thread using the Job API. What bothers me is that the deferred approach loads ALL children: although it does so in a different thread, it still loads every element even though only a minimal subset is displayed at once.

I can provide code examples for my questions if required...

I am currently struggling with these myself, so if I manage to come up with something in the meantime I will gladly share it here.

Thanks!

Regards, Svilen

+2  A: 

I find the Eclipse framework sometimes schizophrenic. I suspect that the DeferredTreeContentManager as it relates to the ILazyTreeContentProvider is one of these cases.

In another example, at EclipseCon this past year they recommended using adapter factories (IAdapterFactory) to adapt your models to the binding context needed at the time. For example, if you want your model to show up in a tree, do it this way:

treeViewer = new TreeViewer(parent, SWT.BORDER);
IAdapterFactory adapterFactory = new AdapterFactory();
Platform.getAdapterManager().registerAdapters(adapterFactory, SomePojo.class);
treeViewer.setLabelProvider(new WorkbenchLabelProvider());
treeViewer.setContentProvider(new BaseWorkbenchContentProvider());

Register your adapter and the BaseWorkbenchContentProvider will find the adaption in the factory. Wonderful. Sounds like a plan.
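For context, the AdapterFactory registered above might look roughly like this. SomePojo and its getChildren()/getName()/getParent() accessors are assumptions about your model, not workbench API:

```java
import org.eclipse.core.runtime.IAdapterFactory;
import org.eclipse.jface.resource.ImageDescriptor;
import org.eclipse.ui.model.IWorkbenchAdapter;

public class AdapterFactory implements IAdapterFactory {
    // One shared adapter that tells the workbench how to render SomePojo.
    private final IWorkbenchAdapter workbenchAdapter = new IWorkbenchAdapter() {
        public Object[] getChildren(Object o) { return ((SomePojo) o).getChildren(); }
        public ImageDescriptor getImageDescriptor(Object o) { return null; }
        public String getLabel(Object o) { return ((SomePojo) o).getName(); }
        public Object getParent(Object o) { return ((SomePojo) o).getParent(); }
    };

    public Object getAdapter(Object adaptableObject, Class adapterType) {
        if (adapterType == IWorkbenchAdapter.class
                && adaptableObject instanceof SomePojo) {
            return workbenchAdapter;
        }
        return null;
    }

    public Class[] getAdapterList() {
        return new Class[] { IWorkbenchAdapter.class };
    }
}
```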

"Oh by-the-way, when you have large datasets, please do it this way", they say:

TableViewer tableViewer = new TableViewer(parent, SWT.VIRTUAL);
// skipping the noise
tableViewer.setItemCount(100000);
tableViewer.setContentProvider(new LazyContentProvider());
tableViewer.setLabelProvider(new TableLabelProvider());
tableViewer.setUseHashlookup(true);
tableViewer.setInput(null);
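The LazyContentProvider referenced above was not spelled out in the talk; a minimal version might look like this, where loadRow() stands in for your model access:

```java
import org.eclipse.jface.viewers.ILazyContentProvider;
import org.eclipse.jface.viewers.TableViewer;
import org.eclipse.jface.viewers.Viewer;

public class LazyContentProvider implements ILazyContentProvider {
    private TableViewer viewer;

    public void inputChanged(Viewer viewer, Object oldInput, Object newInput) {
        this.viewer = (TableViewer) viewer;
    }

    // Called once per row index as it scrolls into view.
    public void updateElement(int index) {
        viewer.replace(loadRow(index), index);
    }

    public void dispose() {}

    // Placeholder: fetch row 'index' from wherever your data lives.
    private Object loadRow(int index) {
        return "Row " + index;
    }
}
```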

It turns out that the first and second examples are not only incompatible, they're mutually exclusive. These two approaches were probably implemented by different teams that didn't have a common plan, or maybe the API is in the middle of a transition to a common framework. Nevertheless, you're on your own.
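If you do decide to roll your own, one way to approximate both behaviors at once, offered only as an untested sketch, is a plain ILazyTreeContentProvider whose updateElement() schedules a Job for each row that scrolls into view and pushes the fetched child back to the viewer on the UI thread. fetchChild() and countChildren() are placeholders for your (possibly slow) model access, not JFace API:

```java
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.jface.viewers.ILazyTreeContentProvider;
import org.eclipse.jface.viewers.TreeViewer;
import org.eclipse.jface.viewers.Viewer;

public class LazyDeferredTreeProvider implements ILazyTreeContentProvider {
    private TreeViewer viewer;

    public void inputChanged(Viewer viewer, Object oldInput, Object newInput) {
        this.viewer = (TreeViewer) viewer;
    }

    // Called only for rows that actually become visible (the lazy part).
    public void updateElement(final Object parent, final int index) {
        Job job = new Job("Fetching child " + index) { // the deferred part
            protected IStatus run(IProgressMonitor monitor) {
                final Object child = fetchChild(parent, index);
                viewer.getControl().getDisplay().asyncExec(new Runnable() {
                    public void run() {
                        if (!viewer.getControl().isDisposed()) {
                            viewer.replace(parent, index, child);
                            viewer.setChildCount(child, countChildren(child));
                        }
                    }
                });
                return Status.OK_STATUS;
            }
        };
        job.setSystem(true);
        job.schedule();
    }

    public void updateChildCount(Object element, int currentChildCount) {
        viewer.setChildCount(element, countChildren(element));
    }

    public Object getParent(Object element) {
        return null; // TODO: real parent lookup for your model
    }

    public void dispose() {}

    // Placeholders for model access; replace with your own data layer.
    private Object fetchChild(Object parent, int index) { return null; }
    private int countChildren(Object element) { return 0; }
}
```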

arcticpenguin
This is a great answer! You put it nicely: the JFace API contradicts itself when it comes to using lazy and deferred loading at the same time. Which is a shame. Perhaps it is possible to come up with an SWT-based solution, handling the multithreading yourself, but I was really hoping that those two teams had accidentally met during a coffee break and thought, "you know, combining both approaches kind of makes sense and would add additional value to our API." Guess not. :(
Svilen