This isn't exactly what you asked for, but the net effect is the same for the users of the system, so it is worth looking into.
You can set Archiva up to proxy remote repositories by using proxy connectors.
Using this mechanism you could configure G2 with a proxy connector to G1, which means that any artifact deployed to G1 would also be available from G2 via the proxy mechanism.
From the documentation:
> A proxy connector is used to link a managed repository (stored on the Archiva machine) to a remote repository (accessed via a URL). This will mean that when a request is received by the managed repository, the connector is consulted to decide whether it should request the resource from the remote repository (and potentially cache the result locally for future requests).
>
> Each managed repository can proxy multiple remote repositories to allow grouping of repositories through a single interface inside the Archiva instance. For instance, it is common to proxy all remote releases through a single repository for Archiva, as well as a single snapshot repository for all remote snapshot repositories.
>
> A basic proxy connector configuration simply links the remote repository to the managed repository (with an optional network proxy for access through a firewall). However, the behaviour of different types of artifacts and paths can be specifically managed by the proxy connectors to make access to remote repositories more flexibly controlled.
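To make the G2-to-G1 setup above concrete: proxy connectors are normally configured through Archiva's web admin interface, and the result is stored in the archiva.xml configuration file. The fragment below is only a rough sketch of what that might look like; the repository IDs, URL, and policy values are placeholders, and the exact element names can vary between Archiva versions, so check it against your installation.

```xml
<!-- Sketch only (placeholders throughout): on the G2 instance, define G1
     as a remote repository and link G2's managed repository to it with a
     proxy connector. -->
<remoteRepositories>
  <remoteRepository>
    <id>g1</id>
    <name>G1 Archiva instance</name>
    <url>http://g1.example.com/archiva/repository/internal/</url>
  </remoteRepository>
</remoteRepositories>

<proxyConnectors>
  <proxyConnector>
    <!-- sourceRepoId is the managed repository on G2; targetRepoId is the
         remote repository defined above -->
    <sourceRepoId>internal</sourceRepoId>
    <targetRepoId>g1</targetRepoId>
    <policies>
      <releases>once</releases>
      <snapshots>always</snapshots>
      <checksum>fix</checksum>
    </policies>
  </proxyConnector>
</proxyConnectors>
```

With this in place, a client asking G2 for an artifact that only exists on G1 has the request proxied through to G1, and (depending on the caching policy) the artifact is cached on G2 for subsequent requests.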
If proxy connectors won't work for you, you could look into alternative replication approaches. I would reconsider, though, as any homegrown solution is likely to introduce issues as users modify the repository contents.
- As long as you only allow deployment to one of the nodes, you can use rsync or robocopy to replicate the storage location between the nodes (a sample rsync invocation is sketched after this list).
- You can write a custom plugin that listens for the deploy (put) and delete events and fires a corresponding event to the other node.
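For the first option, the sync itself can be as simple as a scheduled one-way copy of the repository storage directory from the node that accepts deployments to the other node. The paths, user, and host below are assumptions; adjust them for your installation and run the command from cron or a similar scheduler.

```sh
# Sketch only: one-way replication of the repository storage from the
# deploy node to the read-only node (paths and host are placeholders).
# --delete removes files on the target that were deleted on the source.
rsync -a --delete /var/archiva/repositories/ archiva@node2:/var/archiva/repositories/
```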