Erlang supports partitioning its nodes into groups using the global_group module. Erlang also supports adding nodes to the node network on the fly. Are these two features usable together?
As far as I understand, you have to name every node at startup to use global groups.
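For context, global groups are normally declared statically in the kernel application's global_groups parameter, e.g. in sys.config. A minimal sketch (group and node names below are illustrative, not from any real setup):

```erlang
%% sys.config -- illustrative group and node names
[{kernel,
  [{global_groups,
    [{group_a, ['a1@host1', 'a2@host2']},
     {group_b, ['b1@host3']}]}]}].
```

Each entry is {GroupName, NodeList} (a {GroupName, PublishType, NodeList} form also exists), which is why every node in a group needs a known name up front.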

+2  A: 

A node is an executing Erlang runtime system which has been given a name, using the command line flag -name (long names) or -sname (short names).

A hidden node is a node started with the command line flag -hidden. Connections between hidden nodes and other nodes are not transitive; they must be set up explicitly. Also, hidden nodes do not show up in the list of nodes returned by nodes(). Instead, nodes(hidden) or nodes(connected) must be used. This means, for example, that a hidden node will not be added to the set of nodes that global keeps track of.
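To illustrate (node and host names here are made up): a hidden node is started with the -hidden flag, and from a regular node it is only visible through the nodes/1 variants:

```erlang
%% started elsewhere with:  erl -sname spy -hidden
%% then, in the shell of a regular node:
nodes().          %% hidden nodes are not listed here
nodes(hidden).    %% lists connected hidden nodes, e.g. [spy@host]
nodes(connected). %% all connected nodes, hidden or not
```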

So, in short: yes, you need to give your node a name for other nodes to be able to find it.

It feels like you are either asking without having tried it out, or you have a very complex question; an example of what you are trying to accomplish might make it possible to give a better answer.

Jonke
+1  A: 

Looking at the global_group source, the list of nodes is part of the config checked by the nodes as they synchronise.

There is, however, an exported function, global_group:global_groups_changed, which handles changes to the node list.

That's called from kernel:config_change (see Module:config_change/3), so it's certainly possible to add new nodes to a global_group during a release upgrade (OTP embedded-systems style; see "Updating Application Specifications").

It might be possible to simply do:

application:set_env( kernel, global_groups, [GroupTuple|GroupTuples] ),
kernel:config_change( [ { global_groups, [GroupTuple|GroupTuples] } ], [], [] )

Assuming you already had a global_groups configuration, or

application:set_env( kernel, global_groups, [GroupTuple|GroupTuples] ),
kernel:config_change( [], [{ global_groups, [GroupTuple|GroupTuples] }], [] )

if you are configuring global_groups into a cluster where it didn't already exist.
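For reference, each GroupTuple above has the same shape as an entry in the static kernel config: {GroupName, NodeList} (or {GroupName, PublishType, NodeList}). A hedged sketch of the first case, with illustrative group and node names, might look like:

```erlang
%% Illustrative only: group_a previously held a1 and a2; we add a3.
NewGroups = [{group_a, ['a1@host1', 'a2@host2', 'a3@host3']}],
application:set_env(kernel, global_groups, NewGroups),
kernel:config_change([{global_groups, NewGroups}], [], []).
```

This requires a running distributed node and, as the answer says, has to be repeated on every node in the group.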

You need to do the above on each node, and if they decide to sync during the process, they'll split down the lines of the config difference. (See the comment in the global_group source about syncing during a release upgrade)

But once that's been done to all of them,

global_group:sync()

should get everything working again.

I haven't tested the above recipe, but it looks tasty to me. ^_^

TBBle
Nice overview, thanks :) I'll have to look through that.
ZeissS