The easiest way to do this is to send data from a device through the network being tested and then receive it back again on the same device.
This lets you calculate the time taken for each packet (delay), the variation in delay between packets (jitter), and detect any packet loss. Note that packet loss is a relative thing - a packet may just be delayed for a long time rather than truly lost. Typically, if a packet is not received within a certain jitter window it is declared lost.
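As a rough illustration, here is a minimal loopback test sketch in Python. It assumes you have a simple UDP echo service running somewhere on the network under test (ECHO_HOST, ECHO_PORT and the intervals below are placeholders, not anything from a particular tool), and it measures per-packet round-trip delay, a simple jitter figure, and loss against a timeout window:

```python
import socket
import struct
import time

# Assumed echo endpoint (e.g. a trivial UDP echo server you run yourself).
# These values are placeholders.
ECHO_HOST, ECHO_PORT = "192.0.2.10", 9000
PACKET_COUNT = 100
SEND_INTERVAL = 0.02      # 20 ms, a typical voice packet interval
JITTER_WINDOW = 0.5       # no echo within 500 ms counts as lost

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(JITTER_WINDOW)

rtts = {}
for seq in range(PACKET_COUNT):
    # Each packet carries a sequence number and its send timestamp.
    sock.sendto(struct.pack("!Id", seq, time.monotonic()), (ECHO_HOST, ECHO_PORT))
    try:
        # For simplicity each send waits for its echo before the next packet goes out.
        data, _ = sock.recvfrom(1024)
        echoed_seq, sent_at = struct.unpack("!Id", data[:12])
        rtts[echoed_seq] = time.monotonic() - sent_at   # round-trip delay
    except socket.timeout:
        pass                                            # treated as lost
    time.sleep(SEND_INTERVAL)

lost = PACKET_COUNT - len(rtts)
delays = list(rtts.values())
# Jitter here is the mean absolute difference between consecutive delays.
jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
          if len(delays) > 1 else 0.0)

print(f"packets sent: {PACKET_COUNT}, lost (no echo within window): {lost}")
if delays:
    print(f"mean delay: {sum(delays)/len(delays)*1000:.1f} ms, "
          f"jitter: {jitter*1000:.1f} ms")
```

This keeps everything on one device, so one clock is used for both send and receive timestamps and no clock synchronisation is needed.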
In the real world you usually want to test from point 'A' to point 'B' (i.e. not just looping back to point 'A'). For a voice or video codec (encoder) which sends packets at a regular interval this is straightforward: you know the second packet should arrive a given time after the first, so if it does not it has been delayed (or has arrived early). From this you can calculate the jitter at point 'B'. Any packet not arriving within the period you allow for packets to arrive is counted as a lost packet. Note that how a sample is encoded can cause issues with the jitter calculation, although if you are creating a test application where you control the encoding yourself you can avoid these issues - see the link below for more on this: http://www.cs.columbia.edu/~hgs/rtp/faq.html#jitter
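For the one-way case, a sketch of the receiver side at point 'B' might look like the following. It assumes the sender at point 'A' transmits a sequence-numbered packet at a known, fixed interval (the port and interval below are placeholders), and it smooths the inter-arrival deviation the way RTP does (the RFC 3550 formula, which the FAQ linked above also discusses):

```python
import socket
import struct
import time

# Receiver at point 'B'. Assumes the sender at point 'A' transmits a packet
# every SEND_INTERVAL seconds carrying just a sequence number; the port and
# interval are placeholders.
LISTEN_PORT = 9001
SEND_INTERVAL = 0.02        # the sender's known, fixed packet interval

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LISTEN_PORT))

jitter = 0.0
lost = 0
prev_seq = None
prev_arrival = None

while True:
    data, _ = sock.recvfrom(1024)
    arrival = time.monotonic()
    (seq,) = struct.unpack("!I", data[:4])

    if prev_seq is not None:
        # Packets whose sequence numbers were skipped are counted as lost
        # (a late, reordered arrival would still be counted here).
        lost += max(0, seq - prev_seq - 1)

        # The sender's spacing is known, so the expected gap between these
        # two arrivals is (sequence difference) * SEND_INTERVAL; any deviation
        # from that is transit-time variation. Smooth it as RTP does (RFC 3550).
        expected_gap = (seq - prev_seq) * SEND_INTERVAL
        deviation = abs((arrival - prev_arrival) - expected_gap)
        jitter += (deviation - jitter) / 16.0
        print(f"seq {seq}: jitter {jitter*1000:.2f} ms, lost so far {lost}")

    prev_seq, prev_arrival = seq, arrival
```

Because only arrival-time differences are used, this works without synchronising the clocks at 'A' and 'B'; measuring one-way delay (as opposed to jitter and loss) is what would require synchronised clocks.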
One other thing to note is that you have not mentioned delay, but it can be very important: you might have a network with no packet loss and excellent jitter, but with a large delay, and this can have a dramatic effect on some applications (e.g. voice). As a simple and not very realistic example, say you have a perfect network from a jitter and packet loss point of view, but with a router which does some sort of security lookup and hence adds a two second delay to every packet. Because it is the same delay for each packet your jitter will be fine, but for a two-way voice application the two second delay between someone at point 'A' speaking and someone at point 'B' hearing them will be a major issue.