As much as quite a few people might like quoting lines from mediocre science fiction and such (sorry, but it is -- read The Moon is a Harsh Mistress for comparison), it would make almost no difference. For most code being written right now, there's already enough memory and a fast enough CPU, so a bigger or faster machine would change nothing at all.
For the vast majority of code, the limits on what we can accomplish are imposed by our ability to define what we want the computer to do, not by getting the computer to do it fast enough or fitting it into available memory.
Even for problems like artificial intelligence, the primary obstacle isn't getting the computer to execute the code fast enough or fitting it all in memory -- it's deciding what we want the computer to do.
Edit: the one area where I can see a really serious difference would be encryption -- with infinite speed and memory, most current forms of encryption could be brute-forced instantly (or at least as soon as they were used to transmit enough data to exceed Shannon's unicity distance, the point at which the ciphertext uniquely determines the key).
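To make that concrete, here's a minimal sketch (a toy XOR cipher I made up for illustration, not any real algorithm): brute force is just a loop over the keyspace, and infinite speed makes the loop bound irrelevant.

```cpp
// toy_bruteforce.cpp -- hypothetical toy cipher, purely for illustration.
#include <cstdint>
#include <iostream>
#include <string>

// "Encrypt" by XORing each byte with alternating halves of a 16-bit key.
// XOR is its own inverse, so the same function decrypts.
std::string xor_crypt(const std::string& text, uint16_t key) {
    std::string out = text;
    for (size_t i = 0; i < out.size(); ++i)
        out[i] ^= (i % 2 == 0) ? char(key >> 8) : char(key & 0xFF);
    return out;
}

// Crude plaintext recognizer: lowercase letters and spaces only.
bool looks_like_english(const std::string& s) {
    for (unsigned char c : s)
        if (c != ' ' && (c < 'a' || c > 'z')) return false;
    return true;
}

int main() {
    const std::string ciphertext = xor_crypt("attack at dawn", 0xBEEF);
    // With a 16-bit key this loop finishes instantly; with a 128-bit key
    // it's hopeless on real hardware -- but on an infinitely fast machine
    // the size of the keyspace simply stops mattering.
    for (uint32_t key = 0; key <= 0xFFFF; ++key) {
        std::string guess = xor_crypt(ciphertext, static_cast<uint16_t>(key));
        if (looks_like_english(guess))
            std::cout << "candidate key " << key << ": " << guess << '\n';
    }
}
```

Note that when the message is short, several wrong keys may also yield plausible-looking text -- that's the unicity distance at work. With enough ciphertext only the true key survives, whereas a one-time pad (key as long as the message) never crosses that threshold, so even infinite compute learns nothing from it.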
As far as the rest goes: yes, it could "solve" the game of chess, if there is a solution (i.e., if there's a forced win from any given position, it could find it, all the way back to the opening -- including the possibility that white has a forced win from the very first move). For the most part, though, the difference would be purely academic -- Deep Blue is already dependably better than all but a handful of the top grandmasters.
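For a sense of what "solving" a game actually means, here's a minimal sketch using single-pile Nim as a stand-in (my toy example -- chess works the same way, just over a tree of roughly 10^120 positions instead of a dozen stones):

```cpp
// solve_nim.cpp -- exhaustive minimax over a tiny game (single-pile Nim).
// Each move removes 1-3 stones; taking the last stone wins.
#include <iostream>

// Returns true if the player to move can force a win with `stones` left.
bool current_player_wins(int stones) {
    if (stones == 0) return false;            // no move left: we already lost
    for (int take = 1; take <= 3 && take <= stones; ++take)
        if (!current_player_wins(stones - take))
            return true;                      // some move leaves the opponent lost
    return false;                             // every move leaves the opponent winning
}

int main() {
    for (int n = 1; n <= 12; ++n)
        std::cout << n << " stones: first player "
                  << (current_player_wins(n) ? "wins" : "loses") << '\n';
    // Prints "loses" exactly when n is a multiple of 4 -- the known solution.
}
```

Chess needs the same exhaustive recursion, plus a memo table holding every position ever visited -- which is precisely the speed and memory that chess-sized trees lack today.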
As far as programming languages go, the one obvious difference is that (for example) some kinds of operator overloading are made illegal in current languages precisely to keep compilation tractable -- to keep it from becoming NP-complete or worse -- and with infinite speed and memory, that restriction wouldn't matter. At least initially, the effect would still be pretty minimal, though. The real question is whether that capability could lead to substantial language improvements once we were used to it -- maybe it would eventually, but certainly not overnight (we already have some languages that are fearsomely difficult to compile, as the sketch below shows).
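As one illustration of "fearsomely difficult to compile" (my example, not tied to any particular overloading rule): C++ template instantiation is Turing-complete, so the compiler is effectively executing a program at compile time, and real compilers cap the recursion (e.g. GCC and Clang's -ftemplate-depth) precisely because their time and memory are finite.

```cpp
// compile_time_fib.cpp -- compile-time computation via template instantiation.
#include <iostream>

// The compiler computes Fibonacci numbers by recursively instantiating
// templates; each instantiation is cached, so this terminates quickly.
template <unsigned N>
struct Fib {
    static const unsigned long long value = Fib<N - 1>::value + Fib<N - 2>::value;
};
template <> struct Fib<1> { static const unsigned long long value = 1; };
template <> struct Fib<0> { static const unsigned long long value = 0; };

int main() {
    // Computed entirely at compile time; push this far enough and it's the
    // *compiler*, not the program, that runs out of time or memory.
    std::cout << Fib<40>::value << '\n';  // prints 102334155
}
```

With infinite speed and memory, caps like that -- and the language rules that exist mainly to keep compilation tractable -- could simply be dropped.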