I want to create a table that will contain dynamic data; it can take the form of a date, a boolean, or a text article.

for example:

meta_key = "isActive" meta_valu = "1"

or

meta_key = "theDate" meta_value = "Sat Jul 23 02:16:57 2005"

or

meta_key = "description" meta_value = "this is a description and this text can go on and on so i need a long field"

The question is: what type should the meta_value field be so that the DB isn't inflated too much for every "1" inserted? In other words, which field types are dynamic and consume only the space of their actual contents?
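Roughly, the table I have in mind would look something like this (just a sketch; the TEXT type on meta_value is a placeholder, since that's exactly what I'm asking about):

    CREATE TABLE post_meta (
        meta_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
        meta_key   VARCHAR(64)  NOT NULL,  -- e.g. 'isActive', 'theDate', 'description'
        meta_value TEXT,                   -- placeholder: what should this type be?
        PRIMARY KEY (meta_id),
        KEY idx_meta_key (meta_key)
    );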

hope I was clear...

A: 

You probably want the VARCHAR field type.

In contrast to CHAR, VARCHAR values are stored as a one-byte or two-byte length prefix plus data.
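For example, a rough sketch of what that difference means in a definition (MySQL syntax; the table and column names are only illustrative):

    -- CHAR(255) always occupies the full 255 characters per row, padded with spaces;
    -- VARCHAR(255) stores a one- or two-byte length prefix plus only the characters actually present.
    CREATE TABLE meta_demo (
        fixed_value   CHAR(255),     -- "1" still costs 255 characters of storage
        dynamic_value VARCHAR(255)   -- "1" costs the length prefix plus 1 byte of data
    );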

rjp
So I'll just define meta_value as a huge VARCHAR and just make sure that the user input is not longer than the field's max?
tridat
That'd work, yeah, unless you ever wanted more than 65,535 bytes in there. If you do, then a TEXT type (up to 4GB for LONGTEXT), as suggested by @Treby, would work (but can cause issues; see http://dev.mysql.com/doc/refman/5.0/en/blob.html )
rjp
I think VARCHAR actually uses more space (the one- or two-byte prefix). Its contents are still embedded in the row data (as opposed to a TEXT type field, where the row contains a reference) and will therefore use the maximum length of storage space.
Bart van Heukelom
A: 

Hope this helps:

datatype=Text
Treby
+1  A: 

I would only use an unstructured data model like the one you suggest if you are storing unstructured data or documents (e.g. FriendFeed).

Alternative storage thoughts

There are many data storage systems better suited to unstructured data than an SQL server. I would recommend combining one of these with your existing structured database.

SQL Options

If you can't do this and must store unstructured data in your SQL DB, you have a couple of options. The datatype isn't really the only concern; how your data is stored is. You need:

  • Some structure, so that an application reading the data can parse it easily without complex string manipulation.

  • A way to define a model for the data in your application, so that when you read the data, you know what you've got.

The following two options address both of these challenges...

XML - xml data type

You need to consider the data you are storing. If you need to return it and perform complex searches on its contents, then XML is your best bet. It also allows you to validate that the stored data matches a defined structure (using an XML schema). See this article:

http://msdn.microsoft.com/en-us/library/ms189887.aspx
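A rough sketch of the xml route (SQL Server syntax; the table, column, and element names are only illustrative):

    CREATE TABLE meta_xml (
        meta_id    INT IDENTITY PRIMARY KEY,
        meta_key   NVARCHAR(64) NOT NULL,
        meta_value XML NOT NULL            -- arbitrary XML, searchable with XQuery
    );

    INSERT INTO meta_xml (meta_key, meta_value)
    VALUES ('theDate', '<value type="date">2005-07-23T02:16:57</value>');

    -- Query inside the stored XML:
    SELECT meta_value.value('(/value/@type)[1]', 'nvarchar(20)') AS value_type
    FROM meta_xml
    WHERE meta_key = 'theDate';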

or JSON - nvarchar(max) datatype

If you need to return this data for display on a web page or for use in JavaScript, then storing it as JSON is easiest to work with. You can easily load it into an object model which can be worked with directly and manipulated. The downside is that complex searches on the data will be very slow compared to XPath (you have to iterate through all the objects and find the ones that match).

If you are storing data from other languages or unusual characters, go with nvarchar (the Unicode version). Otherwise varchar is more efficient.
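A sketch of the JSON route (SQL Server syntax; the names and the JSON shape are only illustrative, and all parsing happens in the application):

    CREATE TABLE meta_json (
        meta_id    INT IDENTITY PRIMARY KEY,
        meta_key   NVARCHAR(64)  NOT NULL,
        meta_value NVARCHAR(MAX) NOT NULL  -- JSON text; the application deserializes it
    );

    INSERT INTO meta_json (meta_key, meta_value)
    VALUES ('isActive', N'{"type": "boolean", "value": true}');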

badbod99
I'm not sure "do it a completely different way" is all that helpful an answer, you know. "Schemaless" tables are a fairly common way of storing flexible data and can work quite well (see FriendFeed, for example: http://bret.appspot.com/entry/how-friendfeed-uses-mysql)
rjp
It totally depends on your situation; FriendFeed is a very specific application, and they do it that way because their data is unstructured. If you are storing unstructured data, look at CouchDB (document storage), BigTable (just one big table!), or Lucene with a file system. I would say a relational database just isn't what you need for this purpose.
badbod99
+1 I especially like the idea of using Lucene to index a collection of flat files; that's a very good solution for "schemaless" data that doesn't fit easily into the relational paradigm.
Bill Karwin
A: 

Are these being used as temp tables or live tables?

Here's an idea I haven't seen yet that MAY work for you if you are primarily worried about size explosion and don't mind having the program do a little extra work. That said, I believe the best practice is to give these meta keys their own fields in their own table (for example, OrderDate), so you can have proper descriptions, dates, etc. A catch-all DB table can make for a lot of headaches.

Create the meta table, using this idea:

MetaID, MetaKey, MetaVarchar(255), MetaText, MetaDate

The varchar, text, and date columns can all be null.

Let the inserting program decide which column to put the value in, and the read query simply returns whichever column isn't null. Short items go in the varchar column, long ones in the text column, and dates go in the date column so you can control how the dates are displayed.
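A sketch of that layout and the read query (MySQL syntax assumed; the names follow the answer's example):

    CREATE TABLE meta (
        MetaID      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        MetaKey     VARCHAR(64)  NOT NULL,
        MetaVarchar VARCHAR(255) NULL,  -- short values, e.g. '1'
        MetaText    TEXT         NULL,  -- long articles
        MetaDate    DATETIME     NULL   -- dates, formatted on the way out
    );

    -- The read side just takes whichever column is not null:
    SELECT MetaKey,
           COALESCE(MetaVarchar, MetaText,
                    DATE_FORMAT(MetaDate, '%a %b %e %T %Y')) AS MetaValue
    FROM meta;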

Cryophallion
A: 

In MySQL I generally use the BLOB datatype, in which I store a serialized version of a dynamic class that I use for a website.

A BLOB is basically binary data, so once you figure out how to serialize and deserialize the data, you should for the most part be golden.

Please note that for large amounts of data it does become much less efficient, but then again it doesn't require you to change your whole structure.

Here is a better explanation of the blob data type: http://dev.mysql.com/doc/refman/5.0/en/blob.html
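A minimal sketch of such a table (MySQL; what goes into meta_value, e.g. the output of whatever serializer your application uses, is opaque to the database):

    CREATE TABLE meta_blob (
        meta_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        meta_key   VARCHAR(64) NOT NULL,
        meta_value BLOB                 -- serialized object; the application deserializes it
    );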

David