Hi,

The program I'm working on is a social network, so there are users and their profiles. The profile structure is UserProfile.

Now, there are various possible Graph implementations and I don't think I'm using the best one. I have a Graph structure and, inside it, a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever else is needed), a pointer to the next Edge and a pointer to the Vertex owner.
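For reference, here is a minimal sketch of those structures (the field names are my guesses for illustration, not my exact code):

```c
#include <stdlib.h>

/* Sketch of the current layout; field names are illustrative only. */
typedef struct Edge {
    int            weight;  /* edge value (weights and whatever is needed) */
    struct Edge   *next;    /* next edge in this vertex's list             */
    struct Vertex *owner;   /* the Vertex this edge is attached to         */
} Edge;

typedef struct Vertex {
    void          *value;   /* in my case, a UserProfile*                  */
    struct Vertex *next;    /* next vertex in the graph's list             */
    Edge          *edges;   /* head of this vertex's edge list             */
} Vertex;

typedef struct Graph {
    Vertex *vertices;       /* head of the vertex list                     */
} Graph;

/* Head insertion is O(1), which is why loading the vertex file is fast. */
Vertex *push_vertex(Graph *g, void *value) {
    Vertex *v = malloc(sizeof *v);
    v->value = value;
    v->edges = NULL;
    v->next  = g->vertices;
    g->vertices = v;
    return v;
}
```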

I have 2 sample files with data to process (CSV style) and insert into the Graph. The first holds the user data (one user per line); the second holds the user relations (for the graph). The first file is inserted into the graph quickly because I always insert at the head, and there are only ~18000 users. The second file has about ~520000 lines of user relations and takes 13-15 minutes to insert into the Graph, even though I still insert the edges at the head. I made a quick test: reading the data is fast, practically instantaneous. The problem is the insertion.

This problem exists because my Graph stores the vertices in a linked list. Every time I need to insert a relation, I have to look up 2 vertices so I can link them together, and doing that for ~520000 relations takes a while.

How should I solve this?

Solution 1) Some people recommended that I implement the vertices of the Graph as an array instead of a linked list. That would give me direct access to every vertex, and insertion time would probably drop considerably. But I don't like the idea of allocating an array of 18000 elements up front. How practical is that? My sample data has ~18000 users, but what if I need far fewer or far more? The linked list has that flexibility: it can be whatever size I want as long as there's memory for it. The array doesn't, so how would I handle that situation? What are your suggestions?

Using linked lists is good for space complexity but bad for time complexity. And using an array is good for time complexity but bad for space complexity.

Any thoughts about this solution?

Solution 2) This project also demands some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use Hash Tables. My tables use separate chaining for collision resolution, and when a load factor of 0.70 is reached, I rebuild the table. I base the next table size on this list: http://planetmath.org/encyclopedia/GoodHashTablePrimes.html.
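As a sketch of that resize rule (the helper name is made up, and the primes array below carries only the first few entries of that list; the real table would hold the full sequence):

```c
#include <stddef.h>

/* First few entries of the good-primes sequence (each roughly 2x the
 * previous one); the real table would carry the complete list. */
static const size_t primes[] = { 53, 97, 193, 389, 769, 1543, 3079 };

/* Hypothetical helper: pick the next table size once the load factor
 * n/size goes past 0.70; otherwise keep the current size. */
size_t next_size(size_t size, size_t n) {
    if ((double)n / size <= 0.70)
        return size;                       /* below the threshold */
    for (size_t i = 0; i < sizeof primes / sizeof *primes; i++)
        if (primes[i] > size)
            return primes[i];
    return size * 2 + 1;                   /* ran past the list */
}
```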

Currently, both Hash Tables hold a pointer to the UserProfile instead of duplicating the profile itself. Duplicating it would be silly: changing data would then require 3 separate updates. So I just store the pointer to the UserProfile, and the same pointer is also stored as the value in each Graph Vertex.

So I have 3 data structures, one Graph and two Hash Tables, and every single one of them points to the same UserProfile. The Graph serves for finding the shortest path and things like that, while the Hash Tables serve as quick indexes by name and by ID.

What I'm thinking of doing to solve my Graph problem is this: instead of having the Hash Table values point to the UserProfile, I point them to the corresponding Vertex. It's still a pointer; no more and no less space is used, I just change what I point to.

That way I can quickly look up each Vertex I need and link the two together, which should insert the ~520000 relations pretty quickly.

I thought of this solution because I already have the Hash Tables and I need to keep them, so why not take advantage of them to index the Graph vertices instead of the user profiles? It's basically the same thing; I can still reach the UserProfile quickly by going to the Vertex first and then to the UserProfile.
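A sketch of what that edge insertion would look like (the toy fixed-size table below just stands in for my real separate-chaining table, and all the names are made up):

```c
#include <stdlib.h>

/* Minimal stand-ins for the real structures (names are guesses). */
typedef struct Vertex Vertex;
typedef struct Edge {
    struct Edge *next;
    Vertex      *owner;   /* the vertex at the other end */
    int          weight;
} Edge;
struct Vertex {
    void *profile;        /* still reachable: Vertex -> UserProfile */
    Edge *edges;
};

/* Toy id -> Vertex* table standing in for the real hash table.  With
 * the table values pointing at vertices instead of profiles, each
 * endpoint of a relation is found in O(1) on average instead of an
 * O(V) scan of the vertex list. */
#define TOY_SIZE 16
static Vertex *table[TOY_SIZE];
static Vertex *ht_get_by_id(int id) { return table[id % TOY_SIZE]; }

/* Link two users: two O(1) lookups plus an O(1) head insertion. */
void add_relation(int from_id, int to_id, int weight) {
    Vertex *from = ht_get_by_id(from_id);
    Vertex *to   = ht_get_by_id(to_id);
    Edge *e = malloc(sizeof *e);
    e->weight = weight;
    e->owner  = to;
    e->next   = from->edges;
    from->edges = e;
}
```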

But do you see any cons to this second solution compared with the first one? Or only pros that outweigh the pros and cons of the first solution?

Other Solution) If you have any other solution, I'm all ears, but please explain its pros and cons over the previous 2. I really don't have much time to waste on this right now; I need to move on with the project. So if I'm going to make such a change, I need to understand exactly what to change and whether it's really the way to go.

Hopefully no one fell asleep reading this and closed the browser; sorry for the wall of text. But I really need to decide what to do about this, and I really need to make a change.

P.S.: When answering my proposed solutions, please number them as I did, so I know exactly which one you're talking about and I don't confuse myself more than I already am.

+1  A: 

Since the main issue here is speed, I would prefer the array approach.

You should, of course, maintain the hash table for the name-index lookup.

If I understood correctly, you only process the data once, so there is no dynamic data insertion.

To deal with the space allocation problem, I would recommend:

1 - Read the file once to get the number of vertices.

2 - Allocate that space.

If your data is dynamic, you could implement a simple method to grow the array in steps of 50%.

3 - For the Edges, replace your linked list with an array as well. This array should also be grown dynamically in steps of 50%.

Even with the "extra" space allocated, when you grow in steps of 50%, the total size used by the array should be only marginally larger than the size of the linked list.
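For example, a 50% growth step could look like this (a sketch with an array of ints standing in for your edge array; names are illustrative):

```c
#include <stdlib.h>

/* Grow an array geometrically: each realloc adds 50% capacity, so n
 * appends cost O(n) total and the slack never exceeds half the data. */
typedef struct {
    int    *items;
    size_t  len, cap;
} IntArray;

int array_push(IntArray *a, int v) {
    if (a->len == a->cap) {
        size_t new_cap = a->cap ? a->cap + a->cap / 2 : 8;
        int *p = realloc(a->items, new_cap * sizeof *p);
        if (!p)
            return -1;           /* allocation failed; old data intact */
        a->items = p;
        a->cap   = new_cap;
    }
    a->items[a->len++] = v;
    return 0;
}
```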

I hope I could help.

Khalid
The data is dynamic. There is a user interface for manual insertion, but I also have sample files to populate the database with some initial data (mainly for testing purposes). I don't think I have enough time to change my Graph library and all its associated functions from a linked-list-based Graph to an array-based one; that's why I thought of the second solution. But I see what you mean: the Hash Table handles name indexing and the Graph already indexes by ID, so there's no need for a second Hash Table. I just don't know if I have the time to make such changes.
Nazgulled
I like your suggestions, I really do, and that's what I'll do if I find time to implement that approach before the deadline. For now I think I'll go with solution 2; it's a dirty but quick fix that will have to do for the moment.
Nazgulled