You should ask yourself how valuable it really is to display data for every iteration, and which aspects of the data the user actually cares about. I think the main thing you need to do here is simply reduce the amount of data you display to the user.
For example, if the user only cares about the trend, then you could easily get away with evaluating these functions only every so many iterations (instead of every iteration). On the graph above, you could probably get just as informative a plot by drawing the value on the curve only every 100 iterations, which would reduce the size of your data set (and the time spent in your drawing algorithm) by a factor of 100. Obviously, you can adjust this if you happen to need more detail.
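As a minimal sketch of what that decimation could look like (assuming matplotlib, with `expensive_metric` as a made-up stand-in for whatever per-iteration value you're plotting):

    import math
    import matplotlib.pyplot as plt

    def expensive_metric(i):
        # hypothetical stand-in for your real per-iteration evaluation
        return math.exp(-i / 2000.0) + 0.01 * math.sin(i / 50.0)

    SAMPLE_EVERY = 100          # evaluate/plot only 1 iteration in 100
    n_iterations = 10_000

    xs, ys = [], []
    for i in range(n_iterations):
        if i % SAMPLE_EVERY == 0:
            xs.append(i)
            ys.append(expensive_metric(i))   # skip the other 99 evaluations

    plt.plot(xs, ys)
    plt.xlabel("iteration")
    plt.ylabel("metric")
    plt.show()

Bumping `SAMPLE_EVERY` down gives you more detail at the cost of more points, so it's an easy knob to tune.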
To avoid recomputing data points whenever you redraw, keep the small set of points you've already drawn in memory instead of recomputing or reloading the full data set. That way you never have to go to disk on a redraw, and rendering is much cheaper because there are far fewer points to push through again.
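Something as simple as the following would do it (a rough sketch; the class and method names are made up, not from any particular library):

    class SampledSeries:
        """Keeps only the downsampled points in memory between redraws."""

        def __init__(self, sample_every=100):
            self.sample_every = sample_every
            self.xs = []        # cached x values of already-sampled points
            self.ys = []        # cached y values of already-sampled points

        def record(self, iteration, value):
            # Called once per iteration; stores only 1 point in sample_every.
            if iteration % self.sample_every == 0:
                self.xs.append(iteration)
                self.ys.append(value)

        def points(self):
            # A redraw just reuses the cached points; no disk, no recompute.
            return self.xs, self.ys

New iterations only append to the cache, so redraws stay cheap no matter how long the run gets.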
If you're concerned about missing outliers due to sampling error, a simple option is to compute each displayed point from a window of the original data instead of from a single sample. For each window you might keep the max, min, mean, and median, and possibly draw error bars around the values you display to the user.
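Here's a sketch of that window-based reduction, assuming the raw values fit in a NumPy array (I'm using non-overlapping windows for simplicity; a true sliding window works the same way, just with more output points):

    import numpy as np

    def reduce_by_windows(values, window=100):
        """Collapse each window of raw points to min/max/mean/median."""
        values = np.asarray(values, dtype=float)
        n_windows = len(values) // window
        trimmed = values[:n_windows * window].reshape(n_windows, window)
        return {
            "min":    trimmed.min(axis=1),
            "max":    trimmed.max(axis=1),
            "mean":   trimmed.mean(axis=1),
            "median": np.median(trimmed, axis=1),
        }

    # e.g. plot the mean with the min/max as a shaded band:
    # stats = reduce_by_windows(raw_values)
    # plt.fill_between(range(len(stats["mean"])), stats["min"], stats["max"], alpha=0.3)
    # plt.plot(stats["mean"])

Because every raw value contributes to some window's min/max, an outlier still shows up in the plot even though you're drawing 100x fewer points.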
If you need to get really aggressive, people have come up with plenty of fancier methods for reducing and displaying time series data. For further information, you could check out the Wikipedia article, or look into toolkits like R, which already have a lot of these methods built in.
Finally, this stackoverflow question seems relevant, too.