It depends on a couple of things.
Firstly, the distribution of values. If you only have five distinct values but one of them accounts for 99.9999% of the rows in the table, then you would not want the optimiser to use the index for that value, but you might want it to use it for the others. In cases like this it can be worth using a function-based index so that you only index the values of interest, not the ones that are just taking up space.
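For example, something along these lines -- the table and column names are invented for illustration, and it assumes the dominant value is a constant you can hard-code. NULL keys are not stored in a B-tree index, so mapping the common value to NULL keeps the index small and only useful for the rare values:

    CREATE INDEX orders_status_fbi
      ON orders (CASE WHEN status = 'COMPLETE' THEN NULL ELSE status END);

    -- Queries have to use the same expression to pick the index up:
    SELECT *
    FROM   orders
    WHERE  CASE WHEN status = 'COMPLETE' THEN NULL ELSE status END = 'PENDING';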
Secondly, are there queries that can be answered using that index without accessing the table?
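For example (again with made-up names), if the queries only ever need a couple of columns then a composite index can cover them, so the execution plan has no TABLE ACCESS step at all:

    CREATE INDEX orders_status_cov
      ON orders (status, order_date, customer_id);

    -- Answered entirely from the index:
    SELECT order_date, customer_id
    FROM   orders
    WHERE  status = 'PENDING';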
Note that it's not just the percentage of rows that will be accessed that matters, but the number of table blocks that will need to be accessed. For example, if you have a table of 1000 blocks with 30 rows per block on average, and one column has 30 distinct values (each one present in 1000 rows), then the number of blocks that must be visited to read every row for a single value varies between roughly 1000/30 ≈ 34 (worth using an index) and 1000 (not worth using an index), depending on how the rows are distributed. This is expressed by the clustering factor of the index -- if its value is close to the number of rows in the table then the index is less likely to be used, and if it is close to the number of blocks then it is more likely to be used.
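You can see where an existing index sits on that scale by comparing its clustering factor with the table's row and block counts (a sketch with a hypothetical table name; the statistics need to be reasonably fresh for the figures to mean anything):

    SELECT i.index_name,
           i.clustering_factor,
           t.num_rows,
           t.blocks
    FROM   user_indexes i
           JOIN user_tables t ON t.table_name = i.table_name
    WHERE  i.table_name = 'ORDERS';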
Also, you might look at index compression to see whether that saves you space.
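With only five distinct values the repeated keys compress well with prefix compression. Something like this (hypothetical names again; COMPRESS 1 compresses the leading column of the key):

    CREATE INDEX orders_status_ix
      ON orders (status, order_date)
      COMPRESS 1;

    -- For an existing index you can ask Oracle what compression would save
    -- (VALIDATE STRUCTURE takes a lock, so avoid it on a busy system):
    ANALYZE INDEX orders_status_ix VALIDATE STRUCTURE;
    SELECT opt_cmpr_count, opt_cmpr_pctsave FROM index_stats;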
Be careful with bitmap indexes -- they are not friendly to systems where the indexed table is subject to modification by multiple sessions at the same time (e.g. two people both inserting rows into the table concurrently).
A more effective strategy, if you do want to improve the efficiency of queries with predicates on these five values, is partitioning -- partly because of partition pruning in the query, but also because of the improvement in the statistics available to the optimiser: when it knows that only one partition will be accessed it can use partition-level statistics instead of global statistics.
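A list-partitioned layout on that column would look something like this (hypothetical table and values); a predicate on the column then prunes to a single partition, whose statistics the optimiser can use directly:

    CREATE TABLE orders (
      order_id    NUMBER,
      status      VARCHAR2(10),
      order_date  DATE
    )
    PARTITION BY LIST (status) (
      PARTITION p_complete  VALUES ('COMPLETE'),
      PARTITION p_pending   VALUES ('PENDING'),
      PARTITION p_cancelled VALUES ('CANCELLED'),
      PARTITION p_other     VALUES (DEFAULT)
    );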