In the input layer, have X separate nodes for each dimension of the input data (weather, wind, etc.), where X is the number of days to look back (say 4-7). Then normalize each input dimension to a suitable range, say [-1.0, 1.0].
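For example, here is a rough sketch (in Python/NumPy) of how the windowing and [-1, 1] scaling might look; the array shapes, the 2 data dimensions and the lookback of 5 are just illustrative placeholders, not something from your data:

```python
import numpy as np

lookback = 5  # X: days to look back, somewhere in the suggested 4-7 range

def normalize(raw):
    # Scale each dimension independently into [-1.0, 1.0]
    lo, hi = raw.min(axis=0), raw.max(axis=0)
    return 2.0 * (raw - lo) / (hi - lo) - 1.0, (lo, hi)

def make_windows(data, lookback):
    # Each input row is the last `lookback` days flattened together;
    # the target is the next day's values.
    X, y = [], []
    for t in range(lookback, len(data)):
        X.append(data[t - lookback:t].ravel())
        y.append(data[t])
    return np.array(X), np.array(y)

raw = np.random.rand(365, 2)          # stand-in for a year of (weather, wind) readings
data, (lo, hi) = normalize(raw)
X, y = make_windows(data, lookback)   # X.shape == (360, lookback * 2)
```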
Have a second "hidden" layer fully interconnected with the first layer (and also with a fixed 1.0 "bias" input node to serve as a fixed reference point). There should be fewer nodes here than in the input layer, but that's just a rule of thumb; you may need to experiment.
The last layer is your output layer, fully interconnected with the second layer (and again drop in a bias node). Have a separate output neuron for each output dimension.
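A minimal sketch of that layout, continuing the snippet above, with an explicit fixed 1.0 bias appended to each layer; the concrete layer sizes are made-up examples:

```python
n_in = lookback * 2     # lookback days * number of dimensions
n_hidden = 6            # fewer than the input layer, per the rule of thumb
n_out = 2               # one output neuron per dimension

rng = np.random.default_rng(0)
# +1 column in each weight matrix for the fixed 1.0 bias node
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in + 1))
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden + 1))

def forward(x):
    # Append the fixed 1.0 bias input, then apply tanh in the hidden layer
    # (tanh fits the [-1, 1] normalization range).
    h = np.tanh(W1 @ np.append(x, 1.0))
    return np.tanh(W2 @ np.append(h, 1.0))
```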
Don't forget to train with the normalized values on both the input and output side. Since this is a time series, you may not need to randomize the order of the training data; feed the samples in chronological order instead, and your net will learn the temporal relations too (with luck :)
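Continuing the sketches above, an online training loop that feeds the windows in chronological order and applies plain backpropagation with a squared-error loss might look like this (learning rate and epoch count are arbitrary):

```python
lr = 0.01
for epoch in range(100):
    for x, target in zip(X, y):           # in time order, no shuffling
        a1 = np.append(x, 1.0)            # input + fixed bias
        h = np.tanh(W1 @ a1)
        a2 = np.append(h, 1.0)            # hidden + fixed bias
        out = np.tanh(W2 @ a2)

        # Backpropagate the squared error through both tanh layers
        d2 = (out - target) * (1.0 - out ** 2)
        d1 = (W2[:, :-1].T @ d2) * (1.0 - h ** 2)
        W2 -= lr * np.outer(d2, a2)
        W1 -= lr * np.outer(d1, a1)
```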
(Also note that there is a method called "temporal backpropagation" which is tuned for time series data.)