I would imagine that if you set up the database so that it automatically shrinks and repairs as needed, you are already being pretty thorough. That is also why there is a dearth of literature about best practices: the prevailing assumption is that it "just works," so any guidance you find in this area is going to be vague.
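For what it's worth, here is a minimal sketch of what that automatic setup can look like, assuming SQL Server CE 3.5 and the System.Data.SqlServerCe provider (the file name is made up); auto-shrink is controlled through the connection string rather than through any scheduled job:

```csharp
using System.Data.SqlServerCe;

class AutoMaintenanceExample
{
    static void Main()
    {
        // Hypothetical file name. "Autoshrink Threshold" is the percentage of
        // free space allowed in the .sdf file before the engine shrinks it
        // automatically; a value of 100 disables auto-shrink entirely.
        var connectionString =
            "Data Source=MyData.sdf; Autoshrink Threshold=60; Flush Interval=10;";

        using (var connection = new SqlCeConnection(connectionString))
        {
            connection.Open(); // from here on, shrinking is handled by the engine
        }
    }
}
```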
Here is a (partial) quote from a webcast at http://www.microsoft.com/web/library/Details.aspx?id=sql-server-2008-compact-express-depoly-manage
Maintaining your SQL Server Express editions is fairly similar to managing any other multi-user database, meaning that we have the option to go in and deal with filegroups, we can deal with backup options and recovery models and whatnot. [But] when we deal with Compact editions, or SQL Server CE, we don't have nearly as many options. Really, the only options we have are how we want to deal with shrink and repair.
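If you do want to trigger those two operations yourself, they are exposed on the SqlCeEngine class. A rough sketch, assuming SQL Server CE 3.5; the file name and the choice of RepairOption are illustrative, not prescriptive:

```csharp
using System.Data.SqlServerCe;

class ShrinkAndRepairExample
{
    static void Main()
    {
        var connectionString = "Data Source=MyData.sdf;"; // hypothetical file

        using (var engine = new SqlCeEngine(connectionString))
        {
            // Shrink reclaims empty pages and truncates the database file.
            engine.Shrink();

            // Verify checks the file for corruption; Repair rebuilds it,
            // keeping whatever rows are still readable.
            if (!engine.Verify())
            {
                engine.Repair(connectionString, RepairOption.RecoverCorruptedRows);
            }
        }
    }
}
```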
Here is another one from MSDN at
http://msdn.microsoft.com/en-us/library/ms838028.aspx#youcantakeitwithyou_sqlserverce_topic4
Notice that they give good detail about the architecture of the database, but they still don't give a maintenance schedule. Their advice: compact when the database begins to get slow. Also note that this advice is circa 2005, and things have improved since then; in particular, the maintenance procedures have now been automated.
Keep Your House (or Database) in Order

Another big factor in the performance of large databases in SQL Server CE 2.0 is the organization of the database structure itself. As your application modifies the contents of the database, the records become more randomly distributed within the database file structure. This factor is especially true after a large number of inserts and deletes. To ensure optimal access to the database, compact the database after any substantial change to the contents.

In addition to recovering unused space, performing a compact on the database has two notable impacts on performance: first, it stores all table records in order by their primary key; second, it updates the statistics used by the query processor.

Ordering the records by primary key can notably improve primary key access. This is due to the page-oriented nature of SQL Server CE (and most other databases). Rather than loading individual records from the database into memory, SQL Server CE loads blocks of records called pages. When the database records are grouped in order by primary key, loading the page containing one record automatically loads those records with similar primary key values. For most applications, this results in what's referred to as a good "hit rate," which means that when your application goes to access successive database records, there is a strong likelihood that the page containing those records is already in memory and can be directly accessed. When records are more randomly distributed, as often happens after a large number of inserts and deletes, there is a poor hit rate requiring SQL Server CE to retrieve more pages from the database file to access the same number of records.

The query processor statistics influence how the query processor determines the best method for locating records. Decisions like whether to use a key or do a sequential scan to locate a particular record are all influenced by the query processor statistics. As the statistics become stale, there is an increased likelihood that the query processor may make a less than optimal decision. Performing a compact refreshes these statistics.
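For completeness, compacting from code is a one-liner on the same SqlCeEngine class. A hedged sketch, assuming SQL Server CE 3.5 and made-up file names; as written, Compact produces a reorganized copy of the database at the destination path:

```csharp
using System.Data.SqlServerCe;

class CompactExample
{
    static void Main()
    {
        var source = "Data Source=MyData.sdf;";                // hypothetical source
        var destination = "Data Source=MyData.compacted.sdf;"; // hypothetical output

        using (var engine = new SqlCeEngine(source))
        {
            // Compact rewrites the database into a new file: rows end up in
            // primary-key order, unused space is reclaimed, and the query
            // processor statistics are refreshed, as the quoted article describes.
            engine.Compact(destination);
        }
    }
}
```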
I sympathize with your experience with Access databases. However, I think you will find that your experience with SQL Server CE bears little resemblance to it.