My gut tells me that putting one format in another is wrong, but I can't seem to come up with concrete reasons.

<root>
 <stuff>
  thing
 </stuff>
 <more>
  <![CDATA[{"a":["b","c"]}]]>
 </more>
</root>

versus just putting it in the XML:

<root>
 <stuff>
  thing
 </stuff>
 <more>
  <a>
   b
  </a>
  <a>
   c
  </a>
 </more>
</root>
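For comparison, a quick sketch (using Python's standard `xml.etree.ElementTree`; the document shape is just the example above) shows that the pure-XML form needs only one parser and one path expression to reach the values:

```python
import xml.etree.ElementTree as ET

xml_doc = """
<root>
 <stuff>thing</stuff>
 <more>
  <a>b</a>
  <a>c</a>
 </more>
</root>
"""

root = ET.fromstring(xml_doc)
# One parser, one path expression gets every value.
values = [a.text for a in root.findall("./more/a")]
print(values)  # ['b', 'c']
```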

The two sections are logically going to be parsed by different code, but as an interchange format, is it ok to mix and match syntax?

Does your answer change if we have an existing endpoint that parses the JSON response? We would have to recode this endpoint for XML ingestion.

+7  A: 

As an interchange format, using two formats puts an extra burden on people who want to interoperate with you: now they need both an XML parser and a JSON parser.

It also makes it harder for people to grok the format, as they have to mentally switch gears when thinking about different parts of your file.

Finally, you won't be able to easily do things that look at the entire structure at once. For example, you can't use XPath to grab JSON bits, nor can you treat the entire response as a JavaScript object. By mixing two formats you get a "worst of both worlds" problem when it comes to manipulating the data.
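To make the two-parser burden concrete, here is a sketch (assuming Python with the standard `xml.etree.ElementTree` and `json` modules, and a simplified version of the mixed document from the question) of what a consumer has to do: the XML parser only yields the CDATA payload as an opaque string, which then has to go through a second parser.

```python
import json
import xml.etree.ElementTree as ET

mixed = """
<root>
 <stuff>thing</stuff>
 <more><![CDATA[{"a":["b","c"]}]]></more>
</root>
"""

root = ET.fromstring(mixed)
# Parser #1 (XML) only gets us an opaque string...
payload = root.find("more").text
# ...so parser #2 (JSON) is needed to reach the actual values.
data = json.loads(payload)
print(data["a"])  # ['b', 'c']
```

Note that no XPath expression can reach `"b"` or `"c"` here: to the XML layer, the JSON is just character data.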

Laurence Gonsalves
Does your answer change if we have an existing endpoint that parses the JSON response? We would have to recode this endpoint for XML ingestion.
Paul Tarjan
Since you already have a solution (a JSON parser on top of an XML parser), the answers to your question come down to portability, readability, maintainability, and plain old-fashioned taste. Sure, it works *now*, but think of who might read it in the future, who you might pass the XML to, and how you will explain to them why they need a JSON parser at the end of it. It does rather seem like you're putting off work now in a way that may create much more work later.
brownstone
@Paul: If it works for your current implementation, there's no reason to change it right now. However, if you start coming up with creative (read: stinky) ways of getting around the issues Laurence described, that's when you want to stick with one or the other.
Spencer Ruport
+3  A: 

It's sorta like the database normalization debate. It's cleaner and more elegant to do everything in pure XML (or normalize your database schema), that way you're not unnecessarily coupled to your particular implementation. But if you have to then convert the XML to JavaScript objects (or join 5 tables for every damned SELECT), you may end up writing lots of extra code and incur unnecessary performance hits.

It all depends on how you balance convenience with formal correctness. If this is an XML interchange format that will be standardized by the W3C and used by millions then dear God, do not use JSON. If this is for an in-house app that'll only be processed by code you yourself have written then screw it, just throw the JSON in there and move on!

John Kugelman
+1. Good practices are good, but they cannot replace good thinking. Sometimes known patterns are not the best solution to a problem; sometimes a hack is the right thing. Whatever you do, know WHY you do it, and the messier it is, the more you need to document it and encapsulate the messiness somewhere, so you can easily replace it some day ... :)
back2dos