Hi all, a simple question: is there any function available in MySQL to split a single row element into multiple columns? I have a table with the fields user_id, user_name, user_location.

A user can add multiple locations. I am imploding the locations and storing them in the table as a single row using PHP.

When I show the user records in a grid view, I have a problem with pagination because I display the records by splitting user_location. So I need to split user_location (a single row into multiple columns).

Is there any function available in MySQL to split the values and count them by a character ( % )?

For example, user_location contains US%UK%JAPAN%CANADA.

How can I split this record into 4 columns? I also need to check the count value (4). Thanks in advance.

A: 

You should do this in your client application, not in the database.

When you make a SQL query, you must statically specify the columns you want to get; that is, you tell the DB which columns you want in your result set BEFORE executing it. For instance, if you have a datetime stored, you may do something like select month(birthday), year(birthday) from ..., so in this case we split the column birthday into 2 other columns, but the columns we will get are specified in the query itself.
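
For illustration only (the table name people and its birthday column are made-up names here, not from the question):

-- split one stored datetime column into two result columns
select month(birthday) as birth_month
     , year(birthday)  as birth_year
from people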

In your case, you would have to get exactly that US%UK%JAPAN%CANADA string from the database, and then split it later in your software, i.e.:

/* get the data from the database; mysqli and a `users` table are assumed here for illustration */
$result = $mysqli->query("SELECT user_location FROM users WHERE user_id = 1");
$row = $result->fetch_assoc();
$user_location  = $row['user_location'];         /* e.g. "US%UK%JAPAN%CANADA" */
$user_locations = explode("%", $user_location);  /* array("US", "UK", "JAPAN", "CANADA") */
$location_count = count($user_locations);        /* 4 */
Bruno Reis
A: 

First normalize the string, removing empty locations and making sure there's a % at the end:

select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1

Then we can count the number of entries with a trick. Replace '%' with '% ', and count the number of spaces added to the string. For example:

select length(replace(str, '%', '% ')) - length(str)
    as LocationCount    
from (
    select replace(concat(user_location,'%'),'%%','%') as str
    from YourTable where user_id = 1
) normalized
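
As a quick sanity check, the same expression can be run against the literal example value, without any table:

select length(replace(str, '%', '% ')) - length(str)
    as LocationCount
from (
    select replace(concat('US%UK%JAPAN%CANADA','%'),'%%','%') as str
) normalized

which returns LocationCount = 4.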

Using substring_index, we can add columns for a number of locations:

select length(replace(str, '%', '% ')) - length(str)
    as LocationCount    
, substring_index(substring_index(str,'%',1),'%',-1) as Loc1
, substring_index(substring_index(str,'%',2),'%',-1) as Loc2
, substring_index(substring_index(str,'%',3),'%',-1) as Loc3
from (
    select replace(concat(user_location,'%'),'%%','%') as str
    from YourTable where user_id = 1
) normalized

For your example US%UK%JAPAN%CANADA, this prints:

LocationCount  Loc1    Loc2    Loc3
4              US      UK      JAPAN

So you see it can be done, but parsing strings isn't one of SQL's strengths.

Andomar
Thanks for your query, but the count Numberoflocations returns only 1 when there is more than one location in the table.
paulrajj
Right, I think I see what you mean now. I'll edit the answer.
Andomar
A: 

This is a bad design. If you can change it, store the data in 2 tables:

table users: id, name, surname ...

table users_location: user_id (fk), location

users_location would have a foreign key to users through the user_id field.
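
A minimal sketch of that layout (column types and lengths are assumptions):

create table users (
   id int not null auto_increment primary key,
   name varchar(64),
   surname varchar(64)
) engine=InnoDB;

create table users_location (
   user_id int not null,
   location varchar(64),
   foreign key (user_id) references users (id)
) engine=InnoDB;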

despart
+2  A: 

The "right thing" would be splitting the locations off to another table and establish a many-to-many relationship between them.

create table users (
   id int not null auto_increment primary key,
   name varchar(64)
);

create table locations (
   id int not null auto_increment primary key,
   name varchar(64)
);

create table users_locations (
   id int not null auto_increment primary key,
   user_id int not null,
   location_id int not null,
   unique index user_location_unique_together (user_id, location_id)
);

Then, ensure referential integrity either using foreign keys (and InnoDB engine) or triggers.
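
For example, with InnoDB, the join table could declare the foreign keys directly (a sketch reusing the column names above; users and locations would also need to be InnoDB):

create table users_locations (
   id int not null auto_increment primary key,
   user_id int not null,
   location_id int not null,
   unique index user_location_unique_together (user_id, location_id),
   foreign key (user_id) references users (id),
   foreign key (location_id) references locations (id)
) engine=InnoDB;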

shylent
A: 

I feel dirty even suggesting this... I think it does what you asked for in the question, but it obviously has the locations hard-coded, so it won't scale nicely. Other responders have detailed the (at least theoretical) flaws in your current rig.

The solution makes use of the REGEXP operator. The weird-looking [[:<:]] type constructs are matches on word boundaries; they're not strictly needed here as US/UK/JAPAN/CANADA are not ambiguous. The location-specific columns will take the value 1 or 0 in the result set as appropriate.

SELECT
      user_id
    , user_name
    , user_location REGEXP '[[:<:]]US[[:>:]]' AS US
    , user_location REGEXP '[[:<:]]UK[[:>:]]' AS UK
    , user_location REGEXP '[[:<:]]JAPAN[[:>:]]' AS JAPAN
    , user_location REGEXP '[[:<:]]CANADA[[:>:]]' AS CANADA
    ,   ( user_location REGEXP '[[:<:]]US[[:>:]]' )
      + ( user_location REGEXP '[[:<:]]UK[[:>:]]' )
      + ( user_location REGEXP '[[:<:]]JAPAN[[:>:]]' )
      + ( user_location REGEXP '[[:<:]]CANADA[[:>:]]' ) AS Count
FROM YourTable
martin clayton
@martin, I gave just an example here; the locations won't be the same for all users, they may differ. So we only need to check how we can split the records into multiple columns using the character.
paulrajj
@paulrajj - You'll need to know the set of columns, i.e. the complete set of locations across all users, before you can form the SELECT statement. So that would require two SELECTs - the first to determine the exact form of the second. Given the flexibility required, I suggest you review your data model - the model in shylent's response looks promising, though you'd still need to dynamically generate a query that maps users_locations to separate columns; a rough sketch of that two-step approach follows below.
martin clayton
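
For illustration, a rough sketch of that two-step approach, assuming shylent's users / locations / users_locations schema: the first statement builds one pivot expression per known location, and a prepared statement then runs the generated SELECT.

-- step 1: build one "max(l.name = 'X') as `X`" expression per location
select group_concat(
         concat('max(l.name = ''', name, ''') as `', name, '`')
       )
into @cols
from locations;

-- step 2: assemble and run the pivot query
set @sql = concat(
  'select u.id, u.name, ', @cols,
  ' from users u',
  ' join users_locations ul on ul.user_id = u.id',
  ' join locations l on l.id = ul.location_id',
  ' group by u.id, u.name');

prepare stmt from @sql;
execute stmt;
deallocate prepare stmt;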