The directive places the word values -2 and -6 in the code stream. If you look at the actual binary representation of this chunk, you'll find, in the middle of the instruction encoding, the bytes FFFFFFFEFFFFFFFA, or FEFFFFFFFAFFFFFF, depending on endianness.
The assembler emits two words of data, with values -2 and -6 respectively, not a single word of value -8.
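You can confirm those byte patterns yourself; here is a quick sketch using Python's `struct` module to encode the two 32-bit words in each byte order:

```python
import struct

# Two 32-bit words, -2 and -6, in two's complement.
big    = struct.pack(">ii", -2, -6)  # big-endian
little = struct.pack("<ii", -2, -6)  # little-endian

print(big.hex().upper())     # FFFFFFFEFFFFFFFA
print(little.hex().upper())  # FEFFFFFFFAFFFFFF
```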
If you look at what is at the AVG: label, you'll notice it uses
lw $v0, ($ra)
lw $t3, 4($ra)
That loads two words into registers v0 and t3 from the return address (i.e. from where you jumped, which is where the data is embedded in the code segment). So v0 gets -2, and t3 gets -6. Note also how the code adds 8 to $ra before returning, to jump over the embedded data.
In short, it's a way of encoding constant values, to be loaded into registers, as part of the code stream.
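Putting the pieces together, the whole pattern looks roughly like this (a sketch of the technique, not your original code; the exact instructions around the two loads are assumptions):

```
        jal   AVG             # $ra now points at the .word data below
        .word -2, -6          # constants embedded in the code stream
        # execution resumes here, because AVG adds 8 to $ra

AVG:    lw    $v0, ($ra)      # first embedded word  -> $v0 = -2
        lw    $t3, 4($ra)     # second embedded word -> $t3 = -6
        addu  $v0, $v0, $t3   # sum = -8
        sra   $v0, $v0, 1     # arithmetic shift right: average = -4
        addiu $ra, $ra, 8     # skip over the two embedded words
        jr    $ra             # return past the data
```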
Now, what the code then does is add the two together and shift right before returning (presumably implementing an average). It does not make much sense in this specific case to do that much work, when you could simply compute the average at assembly time (or, if you write asm by hand, in your head). I assume AVG is supposed to be called from many places, but even then, since it takes its operands from the code segment (usually read-only), I fail to see the point of doing math on constant values at run time.
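For what it's worth, all that run-time work folds down to a single constant. A quick sketch of the same arithmetic (assuming the shift is an arithmetic shift right, i.e. MIPS sra):

```python
a, b = -2, -6        # the two embedded words
avg = (a + b) >> 1   # Python's >> on ints is an arithmetic shift, like sra
print(avg)           # -4, which could have been computed at assembly time
```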