Hello everyone,
I was trying to use cURL from Bash to download a webpage's source code. I am having difficulty downloading a page's code when the page involves something more complex than plain static HTML. For example, I am trying to view the following page's source code with this command:
curl "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones"
However, the result doesn't match the source code Firefox shows when I click "View Source". I believe this is because there are JavaScript elements on the page, but I cannot be sure.
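For what it's worth, this is roughly how I've been comparing the two outputs (curl_output.html and firefox_source.html are just filenames I picked; the second is the page saved manually from Firefox's "View Source"):

curl -s "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones" -o curl_output.html
diff curl_output.html firefox_source.html | head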
For example, I cannot do:
curl "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones" | grep "Access to 4G speeds"
That phrase is clearly present in the source Firefox shows, yet the grep returns nothing. I tried looking through the man pages, but I don't know enough about the problem to figure out a possible solution.
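In case it helps with diagnosis, here is a sketch of a command I could use to dump the response headers curl receives, in case a redirect or error status is involved (just a sketch; I don't yet know what to look for in the output):

curl -s -D - -o /dev/null "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones"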
An ideal answer would explain why this is not working the way I expect and offer a solution using curl or another tool that can be run from a Linux box.
EDIT:
Following a suggestion below, I also tried including a user-agent switch, with no success:
curl "http://shop.sprint.com/NASApp/onlinestore/en/Action/DisplayPhones?INTNAV=ATG:HE:Phones" -A "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3" | grep -i "Sorry"