Take a step back and consider how a size-based cache works. An LRU policy will remove the least recently used entry. An entry is considered to be used when it is accessed (read) or mutated (written). The most common algorithm is to have a doubly-linked list cross-cut the hashtable so that a table lookup finds the entry in the list chain in O(1) time. The list maintains the ordering where the head is the least-recently-used and the tail is the most-recently-used, so an accessed entry is moved to the tail. On an eviction, the head entry is the victim and it is unlinked and removed from the table.
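The layout above can be sketched in a few dozen lines. This is a minimal illustration, not any particular library's implementation; all class and method names are made up for the example. The hashtable maps each key to a node that is also linked into the list, so a lookup reaches the list position in O(1) time.

```python
class Node:
    """An entry that lives in both the hashtable and the linked list."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}                  # key -> Node, O(1) lookup
        self.head = Node()               # sentinel: head.next is the LRU entry
        self.tail = Node()               # sentinel: tail.prev is the MRU entry
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _append(self, node):
        # Link the node at the tail (most-recently-used) end.
        node.prev, node.next = self.tail.prev, self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        self._unlink(node)               # a read counts as a use,
        self._append(node)               # so move the entry to the tail
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:
            node.value = value
            self._unlink(node)           # a write also counts as a use
        else:
            node = Node(key, value)
            self.table[key] = node
        self._append(node)
        if len(self.table) > self.capacity:
            victim = self.head.next      # the head entry is the victim
            self._unlink(victim)
            del self.table[victim.key]
```

Note that both `get` and `put` reorder the entry, matching the definition of "used" above, and that eviction only ever inspects the head of the list.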
Now think about expiration. The policies found in practice will reset the timeout based on the last read (time-to-idle) or the last write (time-to-live). This may sound similar to the LRU algorithm described above, as it maintains ordering based on the same access patterns. The only difference is when the cache determines that an eviction is needed: size vs. time. Thus, we can apply the same algorithm and just change how we detect an overflow!
This means that you can implement an LRU algorithm and generalize the evaluation predicate to allow switching between a capacity bound and a time bound. The result is a time and space efficient expirable cache.
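To make the generalization concrete, here is a hedged sketch of the same ordering algorithm with the overflow check factored out into a predicate. The names (`BoundedCache`, `size_bound`, `time_bound`) are invented for this example; Python's `OrderedDict` stands in for the list-over-hashtable structure, since `move_to_end` and `popitem(last=False)` give the same O(1) reorder and evict-from-head operations. The time-based predicate implements a time-to-idle bound: each access resets the entry's timestamp as well as its position.

```python
import time
from collections import OrderedDict


class BoundedCache:
    """Same ordering algorithm; only the overflow predicate differs."""

    def __init__(self, overflows, clock=time.monotonic):
        self.overflows = overflows       # predicate: (cache) -> bool
        self.clock = clock
        self.data = OrderedDict()        # key -> (value, last_used_time)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        self.data[key] = (entry[0], self.clock())  # reset the idle timer
        self.data.move_to_end(key)                 # move to the MRU end
        return entry[0]

    def put(self, key, value):
        self.data[key] = (value, self.clock())
        self.data.move_to_end(key)
        # The eviction loop is identical for both policies; only the
        # predicate decides whether the head entry must go.
        while self.data and self.overflows(self):
            self.data.popitem(last=False)          # evict the head entry


def size_bound(capacity):
    """Capacity bound: overflow when the table grows past `capacity`."""
    return lambda cache: len(cache.data) > capacity


def time_bound(ttl):
    """Time bound: overflow while the head entry has been idle > `ttl`."""
    def overflows(cache):
        _, (_, last_used) = next(iter(cache.data.items()))
        return cache.clock() - last_used > ttl
    return overflows
```

Swapping `size_bound(100)` for `time_bound(60.0)` changes the cache from size-based LRU to expiration without touching the access or eviction logic, which is the point of the generalization.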