As with all DBMS optimisation questions, it depends on your execution engine.
I would start with the simplest scenario: four separate indexes, one on each of the columns.
This will ensure that any queries using those columns in a way you haven't anticipated will still run okay (a composite fieldx/fieldy/field1 index is generally of no use to a query filtering only on fieldy, since fieldy isn't its leading column).
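To make that concrete, here's a minimal sketch of that starting point, assuming a hypothetical table mytable whose four columns are the question's fieldx, fieldy and field1 plus an assumed fourth column field2:

```sql
-- Hypothetical names: fieldx, fieldy, field1 from the question,
-- field2 assumed as the fourth column.
CREATE INDEX idx_fieldx ON mytable (fieldx);
CREATE INDEX idx_fieldy ON mytable (fieldy);
CREATE INDEX idx_field1 ON mytable (field1);
CREATE INDEX idx_field2 ON mytable (field2);

-- A composite index, by contrast, only helps queries that filter
-- on its leading column(s):
CREATE INDEX idx_composite ON mytable (fieldx, fieldy, field1);

-- Can use idx_composite (fieldx is its leading column):
SELECT * FROM mytable WHERE fieldx = 42 AND fieldy = 7;

-- Generally cannot use idx_composite, but can use idx_fieldy:
SELECT * FROM mytable WHERE fieldy = 7;
```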
Any decent execution engine will choose the most selective filter first (the one whose predicate matches the fewest rows) so as to shrink the result set, and then apply the remaining filters to that.
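You don't have to take that on faith; the query plan shows which index was chosen. A sketch using the EXPLAIN form common to MySQL and PostgreSQL (other engines have equivalents, e.g. Oracle's EXPLAIN PLAN):

```sql
-- Show which index the planner picks for a multi-column filter:
EXPLAIN
SELECT *
FROM   mytable
WHERE  fieldx = 42
  AND  fieldy = 7
  AND  field1 = 'abc';
```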
Then, and only if you hit a performance problem, should you look into improving it with different indexes. Test performance against production-type data, not test databases you've built yourself (unless they mirror the attributes of production anyway).
And keep in mind that database tuning is rarely a set-and-forget operation. You should periodically re-tune because performance depends both on the schema and the data you hold.
Even if the schema never changes, the data may vary wildly.

Re your comment "I just cant experiment by applying indexes and explaining the query": that's exactly what you should be doing.
If you're worried about playing in production (and you should be), set up another environment with similar specs, copy the production data across to it, and fiddle around with your indexes there.
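On that copy the experiment loop is cheap (a PostgreSQL-flavoured sketch, reusing the hypothetical names from above; note that EXPLAIN ANALYZE executes the query for real, which is exactly why you run it on the copy rather than production):

```sql
-- Baseline plan and timing on the staging copy:
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE fieldy = 7 AND field1 = 'abc';

-- Try a candidate composite index...
CREATE INDEX idx_fieldy_field1 ON mytable (fieldy, field1);

-- ...and re-check whether the plan and timing actually improved:
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE fieldy = 7 AND field1 = 'abc';

-- Discard experiments that didn't help:
DROP INDEX idx_fieldy_field1;
```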