It's really hard to justify re-engineering something that "works" as is. You spend a lot of effort just getting back to where you started.
That said.
The transition from EJB 2.1 Session Beans to EJB 3 is pretty trivial. For us, when we made the transition, most of our EJBs were deployed separately rather than in a combined EAR, but you don't have that problem. Even with EJB 3, you will very likely still have an ejb-jar.xml file (or files).
But there's still benefit, I think, and the cost is very low. You can do it incrementally, bean by bean rather than all at once, which is nice, simply by moving the bulk of the information in the current ejb-jar.xml files into annotations within the application. If nothing else, it brings visibility (transaction requirements, etc.) into the code, rather than leaving it "hidden away" in the ejb-jar.xml files.
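As a minimal sketch of what that looks like (the bean and method names here are hypothetical, not from your app), the descriptor entries simply become annotations:

    // OrderService.java: hypothetical local business interface
    import javax.ejb.Local;

    @Local
    public interface OrderService {
        void placeOrder(String orderId);
    }

    // OrderServiceBean.java: what was once a <session> entry in
    // ejb-jar.xml, with its <container-transaction> settings, now
    // carries that information in the code itself.
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;

    @Stateless
    public class OrderServiceBean implements OrderService {

        // Formerly a <trans-attribute>Required</trans-attribute>
        // element buried in the descriptor; now visible at the method.
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void placeOrder(String orderId) {
            // ...
        }
    }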
There's no reason to deploy the "app tier" onto a separate JVM/server. Is the web tier calling Remote session beans, or Local ones? You may or may not see a speedup by switching to local calls (many "co-located" deployments can be made similar to a local invocation on some servers if configured properly; I don't know if you're doing that already).
The biggest risk of switching to local is that with a remote call, your arguments are "safe" from being changed, since they're serialized over the network. With local semantics, if you change the value of an argument, on purpose or not (say, changing the value of a property on a bean), that change will be reflected in the caller. That may or may not be a problem. If they're already using local call semantics, even for a "remote" bean, then they've already encountered this issue.
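Here's a tiny, self-contained illustration of that gotcha (plain Java, hypothetical names; a local EJB call behaves the same way as the direct call below):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class LocalCallGotcha {

        // Stand-in for a local business method that mutates its argument.
        static void applyDiscount(List<Double> prices) {
            for (int i = 0; i < prices.size(); i++) {
                prices.set(i, prices.get(i) * 0.9);
            }
        }

        public static void main(String[] args) {
            List<Double> cart = new ArrayList<>(Arrays.asList(100.0, 50.0));

            applyDiscount(cart);

            // With local (pass-by-reference) semantics the caller sees
            // the change: this prints [90.0, 45.0]. Over a remote
            // interface the list would have been serialized, and the
            // caller's copy would still be [100.0, 50.0].
            System.out.println(cart);
        }
    }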
As for JPA vs SQL, I'd leave it as is. It's not worth redoing the entire data tier to switch to JPA, and if you really wanted the runtime benefits of JPA (as opposed to the development-time ones), notably caching, then you'd have to convert the ENTIRE data layer (or at least large chunks of inter-related parts) all at once. Really risky and error prone.
For the "duplicate jars" issue, that's an artifact of packaging and build, not deployment. To fix the ambiguity issue, you need to work on your development environment to use a shared jar repository, and be cognisant of the fact that if you upgrade the jar for one, you'll upgrade it for all. People decry that that is an unreasonable demand, forcing the entire application to upgrade if a jar changes. For enormous, disparate apps, sure. But for apps in a single JVM, no, it's not. As much as we'd like every little bit to be an isolated world in the teeming soup we call a Java classloader environment, it's simply not true. And the more we can keep that simplified, the better off we in terms of complexity and maintenance. For common jars, you MIGHT consider bundling those jars in to the app server and out of the application. I'm not fond of that approach, but it has it's uses if you can make it work for you. It certainly reduces the deployment size.
Client side, it's not that hard to convert from Struts 1 to Struts 2, as they're both very similar at a high level (notably, they're both action frameworks). The key here is that the two frameworks can live side by side, again allowing incremental change. You can slowly migrate old code over, or you can implement new code solely in the new framework. This is different from trying to mix and match an action framework and a component framework; that's a real "dogs and cats, living together" situation. If I were to go that route, I'd simply deploy the component stuff in its own WAR and move on. The state management of component frameworks makes interoperating with them on the back end really troublesome. If you choose to implement via a new WAR, make sure you spend a little time doing some kind of "Single Sign On" so folks are "logged in" to each module as appropriate. As long as the apps don't share any session state, that's as far as the integration really needs to go. And once you've chosen to add a new subsystem via a new WAR, you can use any tech you want for the client side.
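To give a flavor of how small the jump is, here's a minimal Struts 2 action (hypothetical names; assumes the standard ActionSupport base class): a plain bean with getters/setters instead of an ActionForm, returning a result name instead of an ActionForward.

    import com.opensymphony.xwork2.ActionSupport;

    // Hypothetical Struts 2 action. Struts 1 actions can keep running
    // under their own mapping while new work lands on classes like this.
    public class GreetingAction extends ActionSupport {

        private String name; // populated directly from request parameters

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        @Override
        public String execute() {
            if (name == null || name.trim().isEmpty()) {
                return INPUT;   // result names replace ActionForwards
            }
            return SUCCESS;
        }
    }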
Caching is a different issue. The different caches solve different problems. It's one thing to cache and memoize little bits within the system (like JSP renderings), or to use a distributed cache to transfer sessions across instances during failover or load balancing. It's quite another to have a cache-based domain layer where the persistence and caching are very, very tightly integrated. That's far more complex. Just keeping it all straight in your head is painful.
The former you can pretty much sprinkle willy-nilly across the application as you encounter a need, and those kinds of caches can be pretty much standalone rather than part of a coordinated, overarching caching framework.
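For example, a standalone memoizing cache of that sort can be as small as this (a sketch, hypothetical names):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // A tiny "sprinkle where needed" cache: no coordination with any
    // other cache in the system, safe to drop in around one hot spot.
    public final class Memoizer<K, V> {

        private final Map<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> loader;

        public Memoizer(Function<K, V> loader) {
            this.loader = loader;
        }

        public V get(K key) {
            // Compute once per key; later callers get the cached value.
            return cache.computeIfAbsent(key, loader);
        }
    }

Wrap an expensive lookup or rendering in something like new Memoizer<>(this::renderFragment) (a hypothetical method) and you're done; nothing else in the system has to know about it.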
The latter is different. There you pretty much need to redo your entire data model, even for the parts you're not caching at all, because you want to ensure you have consistent access to the data and its cached views.
This is effectively what JPA does, with its two levels of caching, and why I mentioned earlier that it's not something you can casually slip into an application, save for mostly standalone chunks of your system. When you have distinct modules hitting the same backend resources, cache coherence and consistency become a real issue, and that's why you want the caching integrated across both systems.
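In JPA 2.0 terms (a sketch; the entity name is hypothetical): the persistence context is the first-level cache, and opting an entity into the shared second-level cache is one annotation, but it only holds together if all access to that table goes through the EntityManager.

    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    @Cacheable // opt in to the provider's shared second-level cache
    public class Customer {

        @Id
        private Long id;

        private String name;

        // Any module that updates this table with raw SQL behind JPA's
        // back leaves stale entries in that cache: the coherence problem
        // described above.
    }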
Mind, it can be done. The trick is simply integrating the data access level, and then you can start caching at that level. But if you have folks making direct SQL calls, those have to go.
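Here's one sketch of what "integrating the data access level" can mean (hypothetical names, reusing the Customer type from above): funnel all reads through one interface, and caching becomes a decorator at that single point rather than something scattered through the code.

    import java.util.concurrent.ConcurrentHashMap;

    // All callers depend on this interface, never on SQL directly.
    public interface CustomerDao {
        Customer findById(long id);
    }

    // The existing SQL (or JPA) lookup stays behind the interface...
    class JdbcCustomerDao implements CustomerDao {
        public Customer findById(long id) {
            // ... existing SQL lookup goes here ...
            return null; // placeholder for the real query
        }
    }

    // ...and caching is layered on at the one integration point.
    class CachingCustomerDao implements CustomerDao {
        private final CustomerDao delegate;
        private final ConcurrentHashMap<Long, Customer> cache =
                new ConcurrentHashMap<>();

        CachingCustomerDao(CustomerDao delegate) {
            this.delegate = delegate;
        }

        public Customer findById(long id) {
            return cache.computeIfAbsent(id, delegate::findById);
        }
    }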
Finally, I think the term to use is evolution, not revolution. I don't think migrating to EJB 3 or 3.1 has to be painful, as it pretty much Just Works alongside EJB 2.1, which is a boon. You CAN have a "mixed" environment. The most painful migration would have been if you had used Entity beans, but you didn't, so that's good. And for all of the EJB naysayers, this backward compatibility, spanning what, almost 10 years of EJB, is what lets you keep the bulk of your code and still move forward.