There are two ways to view such a problem. One can look at it as primarily a smoothing problem: use a filtering tool to smooth the data, and only afterwards interpolate with some variety of interpolant, perhaps an interpolating spline. Finding a local maximum of an interpolating spline is easy enough. (Note that you should generally use a true spline here, not a pchip interpolant. Pchip, the method employed when you specify a "cubic" interpolant in interp1, will not accurately locate a local extremum that falls between two data points, because pchip is shape-preserving: it stays monotone between data points, so its extrema occur only at the data points themselves.)
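A minimal sketch of that first approach, under assumed inputs: the data x, y are synthetic here, the moving-average window width w is an arbitrary choice, and fminbnd is simply aimed at the whole interval rather than a bracket around a known peak.

% Synthetic noisy data, equally spaced (an assumption for this sketch).
x = linspace(0, 4*pi, 200).';
y = sin(x) + 0.1*randn(size(x));

% Smooth with a crude centered moving average. A real filter design
% would do better; note the ends of the series are biased by 'same'.
w = 9;
ys = conv(y, ones(w,1)/w, 'same');

% Interpolate the smoothed data with a true cubic spline, not pchip.
pp = spline(x, ys);

% Find a maximum by minimizing the negated spline over the interval.
[xmax, negval] = fminbnd(@(t) -ppval(pp, t), x(1), x(end));
ymax = -negval;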
The other approach to such a problem is the one I tend to prefer. Here one uses a least squares spline model both to smooth the data and to produce an approximant instead of an interpolant. Such a least squares spline has the advantage of giving the user a great deal of control to build their knowledge of the problem into the model. For example, often the scientist or engineer has information about the process under study, such as monotonicity, and this can be imposed on a least squares spline model. A related option is a smoothing spline, which likewise admits regularizing constraints. If you have the Spline Toolbox, then spap2 will be of some utility to fit a spline model. Then fnmin will find a minimizer. (A maximizer is easily obtained from a minimization code: negate the spline and minimize.)
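A hedged sketch of that second approach, using spap2, fnmin, and fncmb from the Spline Toolbox (now part of the Curve Fitting Toolbox). The data are again synthetic, and the piece count and spline order are illustrative choices, not recommendations.

% Synthetic noisy data; need not be equally spaced for this approach.
x = linspace(0, 4*pi, 200);
y = sin(x) + 0.1*randn(size(x));

% Fit a least squares cubic spline (order 4) with 10 polynomial pieces.
% Given a piece count, spap2 chooses the interior knots itself; too few
% pieces oversmooths, too many chases the noise.
sp = spap2(10, 4, x, y);

% fnmin finds a minimizer, so negate the spline (via fncmb) to get a
% maximizer, then negate the returned value back.
[negmax, xmax] = fnmin(fncmb(sp, -1));
ymax = -negmax;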
Smoothing schemes that employ filtering methods are generally simplest when the data points are equally spaced. Unequally spaced data might push you toward the least squares spline model. On the other hand, knot placement can be an issue with least squares splines. My point in all of this is that either approach has merit, and either can be made to produce viable results.