On a Linux system, I have a very large text file and I need to create a new text file which contains every line between the first and last occurrence of a particular sessionId (those lines included).
I guess I probably need to use sed or something?
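Here's roughly what I've pieced together so far for a single known file (just a rough sketch, assuming bash plus GNU awk and sed; `input.log` and `output.log` are placeholder names):

```bash
# Pass 1: find the line numbers of the first and last occurrence of the sessionId.
# Pass 2: print only that range with sed, so the whole file never has to sit in memory.
read first last < <(awk '/1111-ABCD-1111-SOME-GUID/ { if (!first) first = NR; last = NR }
                         END { print first, last }' input.log)
sed -n "${first},${last}p" input.log > output.log
```

I'm not sure whether two passes like this is the right approach for a file this size, or whether a single sed/awk invocation can do the same thing.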
As a bonus, sometimes I won't know which log file will contain the session trace. So a script that can work with regular expressions would be ideal. In this case I would expect the script to find the first file with the sessionId in it and then crop that file before exiting.
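For that bonus case, the best I've come up with is a wrapper script along these lines (again only a sketch; the glob `/var/log/myapp/*.log`, the output name `session-trace.log`, and the hard-coded sessionId are placeholders I made up):

```bash
#!/bin/bash
# Placeholder sessionId; ideally this would be a regular expression passed as $1.
session_id='1111-ABCD-1111-SOME-GUID'

for f in /var/log/myapp/*.log; do            # placeholder glob for the candidate log files
    if grep -q -- "$session_id" "$f"; then   # first file that mentions the sessionId
        # Line numbers of the first and last occurrence in this file.
        read first last < <(awk -v id="$session_id" \
            '$0 ~ id { if (!first) first = NR; last = NR } END { print first, last }' "$f")
        sed -n "${first},${last}p" "$f" > session-trace.log
        echo "Cropped $f (lines $first-$last) into session-trace.log"
        exit 0
    fi
done

echo "sessionId not found in any log file" >&2
exit 1
```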
Example log file, looking for sessionId 1111-ABCD-1111-SOME-GUID:
line one containing other session id: 2222-ABCD-1111-SOME-GUID blaa blaa blaa
line two blaa blaa blaa
line three containing my session id: 1111-ABCD-1111-SOME-GUID blaa blaa blaa
line four containing other session id: 2222-ABCD-1111-SOME-GUID
line five blaa blaa blaa
line six containing other session id: 3333-ABCD-1111-SOME-GUID blaa blaa blaa
line seven containing other session id: 2222-ABCD-1111-SOME-GUID
line eight containing my session id: 1111-ABCD-1111-SOME-GUID blaa blaa blaa
line nine containing other session id: 3333-ABCD-1111-SOME-GUID
line ten containing my session id: 1111-ABCD-1111-SOME-GUID
line eleven
line twelve containing other session id: 3333-ABCD-1111-SOME-GUID blaa blaa blaa
line thirteen containing my session id: 1111-ABCD-1111-SOME-GUID
line fourteen blaa blaa blaa
line fifteen containing other session id: 3333-ABCD-1111-SOME-GUID blaa blaa blaa
The output file would contain lines three to thirteen inclusive.