As long as you write strictly sequential programs (do A, then B, then C; finished!) you don't have concurrency problems, and a language's concurrency mechanisms remain irrelevant.
When you graduate from "programming exercise" programs to real-world work, you soon encounter problems whose natural solution is multi-threading (or whatever flavor of concurrency your environment provides).
Case: Programs with a GUI. Say you're writing an editor with spell checking. You want the spell checker to be quietly doing its thing in the background, yet you want the GUI to smoothly accept user input. So you run those two activities as separate threads.
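That background-worker pattern can be sketched in Python. This is a hypothetical illustration, not a real spell checker: the worker thread posts findings to a queue and a sentinel when done, and the consumer loop stands in for the GUI's event loop polling that queue so user input is never blocked.

```python
import queue
import threading

results = queue.Queue()

def spell_check(words, known):
    # Background worker: report each unknown word to the GUI thread.
    for w in words:
        if w not in known:
            results.put(w)
    results.put(None)  # sentinel: checking finished

known = {"the", "cat", "sat"}
worker = threading.Thread(target=spell_check,
                          args=(["the", "catt", "sat"], known))
worker.start()

misspelled = []
while True:  # in a real GUI this poll would run inside the event loop
    item = results.get()
    if item is None:
        break
    misspelled.append(item)
worker.join()
print(misspelled)
```

The queue does the heavy lifting here: it is thread-safe, so the worker and the GUI thread never touch shared state directly.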
Case: I recently wrote a program (for work) that gathers statistics from two log files and writes them to a database. Each file takes about 3 minutes to process. I moved the two passes into threads that run side by side, cutting total processing time from about 6 minutes to a little over 3.
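A minimal sketch of that structure, with made-up in-memory "logs" standing in for the real files and a line/error count standing in for the real statistics pass. (In CPython the speedup for the real program comes from the two passes being largely I/O-bound; purely CPU-bound work would want processes instead of threads.)

```python
import threading

def gather_stats(lines, results, key):
    # Stand-in for the ~3-minute statistics pass over one log file.
    stats = {"lines": 0, "errors": 0}
    for line in lines:
        stats["lines"] += 1
        if "ERROR" in line:
            stats["errors"] += 1
    results[key] = stats

log_a = ["INFO start", "ERROR disk full", "INFO done"]
log_b = ["INFO boot", "INFO ready"]

results = {}
threads = [
    threading.Thread(target=gather_stats, args=(log_a, results, "a")),
    threading.Thread(target=gather_stats, args=(log_b, results, "b")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both passes before writing to the database
print(results)
```

Note that each thread writes to its own key in `results`, so the two passes never touch the same data.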
Case: Scientific/engineering simulation software. There are lots and lots of problems that are solved by calculating some effect (heat flow, say) at every point in a three-dimensional grid representing your test subject (stellar core, nuclear explosion, geographic dispersion of an insect population...). Basically the same computation is done at every point, and at lots of points, so it makes sense to have them done in parallel.
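The same-computation-at-every-point idea can be sketched with a toy one-dimensional smoothing step on each row of a small grid (a real solver would use neighbours in all dimensions, and would use processes or native code rather than Python threads for the actual number crunching). Within one step the rows are independent, so a pool can compute them in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def relax_row(row):
    # One Jacobi-style smoothing step on a single row: each interior
    # point becomes the average of itself and its two neighbours.
    out = row[:]
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0
    return out

grid = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 9.0, 0.0, 0.0],  # a single "hot" point
    [0.0, 0.0, 0.0, 0.0],
]

# Rows are independent within a step, so map them across a pool.
with ThreadPoolExecutor(max_workers=3) as pool:
    grid = list(pool.map(relax_row, grid))
print(grid[1])
```

Because `relax_row` reads the old row and writes a fresh copy, no two workers ever write the same memory, which is what makes this embarrassingly parallel.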
In all those cases and many more, whenever two computing processes access the same memory (= variables, if you like) at roughly the same time, there is potential for them to interfere with each other and mess up each other's work. The large branch of computer science known as "concurrent programming" is devoted to solving exactly this kind of problem.
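The classic instance of that interference is two threads incrementing a shared counter: `counter += 1` is really a read, an add, and a write, and those three steps from different threads can interleave so that updates are lost. A lock (one of the basic tools concurrent programming provides) makes the read-modify-write atomic; this sketch uses one:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:           # without the lock, the read-modify-write
            counter += 1     # here can interleave and lose updates

threads = [threading.Thread(target=add_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 every time, because the lock serializes the updates
```

Remove the `with lock:` line and the program may still often print 400000 on a given machine, but it is no longer guaranteed to, and that nondeterminism is precisely what makes concurrency bugs so nasty.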
A reasonably useful starting discussion of this topic can be found on Wikipedia.