The key thing about compression is that it works by removing duplicate values in each block. So your test table needs to have a representative spread of data.
Two extreme tables ...
SQL> create table totally_similar
2 ( txt varchar2(1000) )
3 /
Table created.
SQL> insert into totally_similar
2 select rpad('a', 1000, 'a')
3 from dual connect by level <= 1000
4 /
1000 rows created.
SQL> create table totally_different
2 ( txt varchar2(1000) )
3 /
Table created.
SQL> insert into totally_different
2 select dbms_random.string('A',1000)
3 from dual connect by level <= 1000
4 /
1000 rows created.
SQL>
Before we compress let's just check the table sizes...
SQL> select segment_name
2 , sum(bytes)
3 , sum(blocks)
4 from user_segments
5 where segment_name in ('TOTALLY_SIMILAR', 'TOTALLY_DIFFERENT')
6 group by segment_name
7 /
SEGMENT_NAME SUM(BYTES) SUM(BLOCKS)
-------------------- ---------- -----------
TOTALLY_SIMILAR 2097152 256
TOTALLY_DIFFERENT 2097152 256
SQL>
If we compress them we get two radically different results ...
SQL> alter table totally_similar move compress
2 /
Table altered.
SQL> alter table totally_different move compress
2 /
Table altered.
SQL> select segment_name
2 , sum(bytes)
3 , sum(blocks)
4 from user_segments
5 where segment_name in ('TOTALLY_SIMILAR', 'TOTALLY_DIFFERENT')
6 group by segment_name
7 /
SEGMENT_NAME SUM(BYTES) SUM(BLOCKS)
-------------------- ---------- -----------
TOTALLY_SIMILAR 65536 8
TOTALLY_DIFFERENT 2097152 256
SQL>
Note that TOTALLY_SIMILAR is eight blocks big, even though every single row was the same. So you need to understand the distribution of your data before you can calculate the compression ratio. The Oracle documentation has this to say:
The compression factor that can be achieved depends on the cardinality of a specific column or column pairs (representing the likelihood of column value repetitions) and on the average row length of those columns. Oracle table compression not only compresses duplicate values of a single column but tries to use multi-column value pairs whenever possible.
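If you want to gauge that likelihood for your own data, it can help to profile the cardinality of the candidate columns before compressing. A minimal sketch, assuming a target table called BIG_TABLE with a candidate column called STATUS_CODE (substitute your own names):
-- how repetitive is the candidate column? (a lower ratio means more duplication)
select count(*)                                          as total_rows
     , count(distinct status_code)                       as distinct_values
     , round(count(distinct status_code) / count(*), 4)  as cardinality_ratio
from   big_table
/
The lower the ratio, the more repeated values there are for block-level compression to exploit.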
Its advice on estimating the return is that a sample table of 1000 blocks taken from the target table should give a good enough prediction (more blocks give a more accurate forecast). It is hard to tell without knowing your block size, but it seems likely that your TEST table is much bigger than it needs to be. The important thing is whether the data in the test table is representative of your target table. So, did you create it using an export or a sample from the target table, e.g.
create table test_table as select * from big_table sample block (1)
/
You will need to adjust the percentage in the SAMPLE() clause to ensure you get at least 1000 blocks.
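You can check whether the sample copy actually reached that size, and get a ballpark starting percentage, with something like this (a sketch, assuming the TEST_TABLE and BIG_TABLE names from above and that both segments are visible in USER_SEGMENTS):
-- how many blocks does the sample copy actually occupy?
select segment_name
     , blocks
from   user_segments
where  segment_name = 'TEST_TABLE'
/
-- rough sample percentage needed to pull about 1000 blocks from the target
select round(1000 / blocks * 100, 2) as approx_sample_pct
from   user_segments
where  segment_name = 'BIG_TABLE'
/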
Edit:
In most cases compression should actually speed up data retrieval, but YMMV. The cost of compression is paid when the data is inserted or updated. How heavy that tax is, and whether you can do anything to avoid it, rather depends on the profile of your table.
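If you want to put a rough number on that tax for your own table, one approach (a sketch only, reusing BIG_TABLE from the sample above and plain SQL*Plus timing) is to load a comparable sample into an uncompressed and a compressed copy and compare the elapsed times. Bear in mind that basic table compression only takes effect for direct-path operations, hence the APPEND hints:
set timing on
-- empty uncompressed copy, then a timed direct-path load
create table load_nocomp as select * from big_table where 1 = 0
/
insert /*+ append */ into load_nocomp select * from big_table sample block (1)
/
commit
/
-- empty compressed copy, then the same kind of load again
create table load_comp compress as select * from big_table where 1 = 0
/
insert /*+ append */ into load_comp select * from big_table sample block (1)
/
commit
/
A timed full scan of each copy gives a similar feel for the retrieval side.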