I am considering migrating from subversion to git. One of the things we use subversion for is letting our sysadmins manage things like configuration files. To that end, we put $URL$ into each file, which expands to the file's location in the subversion tree. This lets the admins look at a file on some arbitrary host and figure out where in the tree it came from.

The closest analog I could find is gitattributes. It has a filter= directive, but it seems that git doesn't tell the filter which filename it is filtering, which would be necessary to turn $URL$ into a path.
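
For reference, a filter is wired up roughly like this (a sketch; the filter name `url` and the two scripts are placeholders, and git pipes file contents through them on checkout and commit):

```
# .gitattributes: run configuration files through a custom filter
*.conf  filter=url

# smudge runs on checkout, clean on staging/commit
$ git config filter.url.smudge /usr/local/bin/expand-url
$ git config filter.url.clean  /usr/local/bin/strip-url
```

(Newer versions of git may substitute `%f` in the filter command with the path of the file being filtered; I have not verified this for the version we run.)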

There is also the ident directive, which turns $Id$ into the blob hash. That might be usable if the hash could be mapped back to a pathname, but my git-fu isn't strong enough.
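
A sketch of what I have in mind for the ident route (the blob hash and path shown are placeholders):

```
# .gitattributes: expand $Id$ into '$Id: <blob sha>$' on checkout
*.conf  ident

# mapping an expanded blob sha back to a path: ls-tree -r lists every
# blob in HEAD together with the path it lives at
$ git ls-tree -r HEAD | grep 4ba2d1c
100644 blob 4ba2d1c...        etc/httpd/httpd.conf
```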

Any suggestions?

The workflow is as follows:

  1. Admin commits changes to the VCS repo
  2. Admin updates a central location that has checked out the repo
  3. Admin pulls the changes to the host using cfengine
A: 

Work on your git-fu, grasshopper.

T.E.D.
I am doing that, but it would be helpful if somebody who knows git better than I do could give me a hint as to which of the 150+ git man pages I should be focusing on.
Rudedog
A: 

As mentioned in "Does git have anything like svn propset svn:keywords or pre-/post-commit hooks?", Git does not support keyword expansion.

"Dealing with SVN keyword expansion with git-sv" provides a solution based on git config filter (which is not exactly what you want) and/or gitattributes.


The closest example of file-information expansion I have found is still based on the smudge/clean approach, with this git Hash filter, but its clean part removes the expansion from the file, so no path can be recovered.
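
(For illustration, the clean side of such a filter is typically just a one-liner that collapses the expanded keyword back to its bare form before the blob is hashed; the `$Hash$` keyword here is an assumption:)

```
# clean: turn '$Hash: <sha>$' back into a plain '$Hash$'
sed -e 's/\$Hash:[^$]*\$/$Hash$/g'
```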

This thread actually spells it out (as well as mentioning some git-fu commands which might contain what you are looking for; I have not tested them):

Anyway, smudge/clean does not give an immediate solution to the problem because of smaller technical shortcomings:

  • The smudge filter is not passed the name of the file being checked out, so it is not possible to find the exact commit identifier.
    However, this is alleviated by the fact that 'smudge' is only run for changed files, so the last commit is the one needed.

  • The smudge filter is not passed a commit identifier. This is a bit more serious, as there is nowhere else to get this information.
    I tried to use the 'HEAD' value, but apparently it is not yet updated at the moment 'smudge' is run, so the files end up with the date of the "previous" commit rather than the commit being checked out.
    "Previous" means the commit that was checked out before. The problem gets worse if a different branch is checked out, as the files get the timestamp of the previous branch.

AFAIR, the lack of information in the smudge filter was intentional, to discourage this particular use of the smudge/clean mechanism. However, I think this could be reconsidered given Peter's use case: a "checkout-only" workspace for immediate publishing to a webserver.
Alternatively, anyone interested in this use case could implement additional smudge arguments as a site-local patch.

And then there are small annoyances, which seem to be inevitable: if you change the 'clean' filter and check out an earlier revision, it will be reported as having modifications (due to the changed 'clean' definition).

VonC
I have already read and discounted those first two links, because neither of them does what I am looking for. I am also aware of gitattributes, which I thought I had made clear by referencing them in my post.
Rudedog
+1  A: 

Coming at the problem from a completely different angle: how do the files in question end up on the end hosts? I guess today they are either checked out there directly, or copied somehow from an already checked-out repository on another host?

If so, could you modify your process so that the files are checked out to a git repository, and a script does the $URL$ (or other keyword) expansion after checkout? That way you can do whatever substitutions you like, limited only by what a script can figure out in a checked-out repository.
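
A minimal sketch of such a post-checkout script, assuming GNU sed and that $URL$ should become the file's repo-relative path (the script name and the details are illustrative):

```
#!/bin/bash
# expand-keywords.sh: run after checkout/pull; replaces the literal
# token $URL$ in every tracked file with that file's path in the repo
cd "$(git rev-parse --show-toplevel)" || exit 1
git ls-files -z | while IFS= read -r -d '' f; do
    # the \$ escapes keep sed from treating $ as an anchor;
    # assumes paths contain no '|' or '&'
    sed -i "s|\\\$URL\\\$|$f|g" "$f"
done
```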

calmh
+1. Much closer to the production environments I know of: you do not want any VCS tool on an end host. An intermediate environment is much safer.
VonC
We do have a master admin server that has a read-only checkout of the VCS. I thought about doing the edits post-checkout, but the problem is that git then thinks all of the files have been modified, and a subsequent git pull will fail. I've also thought about refreshing the central staging area using git-archive, but that would cause all kinds of problems with changes in file timestamps, etc.
Rudedog
I agree, it's not optimal. It **could** work, by doing something close to `git reset --hard ; git pull ; do-substitutions ; deploy-files`, but still not quite what you're looking for. :)
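
Spelled out as a script (the checkout path and the expansion script are placeholders):

```
#!/bin/bash
# refresh the central read-only checkout without tripping over local edits
cd /srv/config-checkout || exit 1
git reset --hard       # discard the previous round of substitutions
git pull               # now fast-forwards cleanly
./expand-keywords.sh   # re-apply the $URL$ expansion (see the sketch above)
# deploy-files: from here, cfengine can distribute the expanded tree as before
```
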
calmh