Yes, they are very different in theory and in implementation.
Lexers are used to recognize "words" that make up language elements, because the structure of such words is generally simple. Regular expressions are extremely good at handling this simpler structure, and there are very high-performance regular-expression matching engines.
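For concreteness, here is a minimal sketch of a regex-driven lexer in Python; the token names and patterns are made up for the example, not taken from any particular language:

```python
import re

# Illustrative token classes for a toy expression language.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex(text):
    """Yield (kind, value) pairs; whitespace is skipped."""
    for m in MASTER_RE.finditer(text):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()

print(list(lex("x = 42 + y1")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y1')]
```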
Parsers are used to recognize the "structure" of language phrases. Such structure (nested expressions, balanced brackets, and the like) is generally far beyond what "regular expressions" can recognize, so one needs "context-sensitive" parsers to extract it fully. Context-sensitive parsers are hard to build, so the engineering compromise is to use "context-free" grammars and add hacks to the parsers ("symbol tables", etc.) to handle the context-sensitive parts.
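As a sketch of what the parser adds on top of the lexer, here is a toy recursive-descent parser (reusing the `lex` sketch above) for a made-up expression grammar. The nested parentheses are exactly the kind of balanced structure a regular expression cannot recognize, and the declared-identifier check stands in for the "symbol table" hack:

```python
class Parser:
    # Illustrative grammar: expr -> term (('+'|'-') term)*
    #                       term -> NUMBER | IDENT | '(' expr ')'
    def __init__(self, tokens, symbol_table):
        self.tokens = list(tokens) + [("EOF", "")]
        self.pos = 0
        self.symbols = symbol_table   # stands in for the "symbol table" hack

    def peek(self):
        return self.tokens[self.pos]

    def eat(self, kind, value=None):
        tok_kind, tok_value = self.tokens[self.pos]
        if tok_kind != kind or (value is not None and tok_value != value):
            raise SyntaxError(f"expected {value or kind}, got {tok_value!r}")
        self.pos += 1
        return tok_value

    def expr(self):
        node = self.term()
        while self.peek()[1] in ("+", "-"):
            node = (self.eat("OP"), node, self.term())
        return node

    def term(self):
        kind, value = self.peek()
        if kind == "NUMBER":
            return int(self.eat("NUMBER"))
        if kind == "IDENT":
            name = self.eat("IDENT")
            if name not in self.symbols:   # the context-sensitive check
                raise NameError(f"undeclared identifier {name!r}")
            return name
        self.eat("OP", "(")                # nesting needs a stack, not a regex
        node = self.expr()
        self.eat("OP", ")")
        return node

print(Parser(lex("(x + 2) + 3"), symbol_table={"x"}).expr())
# ('+', ('+', 'x', 2), 3)
```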
Neither lexing nor parsing technology is likely to go away soon.
They may be unified by using "parsing" technology to recognize "words", as is currently explored by so-called scannerless GLR parsers. That has a runtime cost: you are applying more general machinery to a problem that usually doesn't need it, and you pay for that in overhead. Where you have lots of free cycles, that overhead may not matter. If you process a lot of text, it does matter, and classical regular-expression lexers will continue to be used.
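To make the overhead point concrete, here is a toy sketch of the scannerless idea: the "word" NUMBER is recognized by character-level parsing code instead of a regex. Real scannerless GLR parsers use far more general machinery than this; the sketch only shows what it means to do the lexer's character-level work with parsing-style rules rather than a tuned regex engine:

```python
def parse_number(text, pos=0):
    """digit+  -- a grammar-style rule applied one character at a time."""
    start = pos
    while pos < len(text) and text[pos].isdigit():
        pos += 1                      # one parse step per character consumed
    if pos == start:
        raise SyntaxError(f"expected digit at position {pos}")
    return int(text[start:pos]), pos

print(parse_number("42+1"))   # (42, 2)
```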