Greg: This proxy setup is effective, but not primarily because nginx can serve static files directly. As you have noticed, that does little for highly dynamic apps.
The real reason is that in this setup apache/mod_php can hand its result off to nginx almost immediately, and nginx takes over the job of spooling it slowly out to the remote client. While nginx is delivering, the apache+PHP interpreter process is freed up to work on the next request, instead of holding all of its resources hostage for the duration of network delivery. This is worth the extra layer for all but the smallest sites, and it scales far better than a setup where content creation and delivery are bound to the same process and the same tuning parameters.
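The fast hand-off Greg describes depends on nginx buffering the backend's response. A minimal sketch of such a front end; the backend address and buffer sizes are my assumptions, not from this thread, so adjust to your setup:

```nginx
# nginx front end buffering responses from an apache/mod_php
# backend assumed to be listening on 127.0.0.1:8080
http {
    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;
            # buffer the whole backend response so apache is freed
            # as soon as it finishes generating the page
            proxy_buffering on;
            proxy_buffers   16 16k;
            proxy_set_header Host      $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

Note that proxy_buffering is on by default in nginx; the point is not to turn it off, or the backend process stays tied to the slow client again.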
Frankie: I suspect you can improve your setup further by adjusting your process limits with these insights in mind.
1. You want only a few apache/php processes, like 2-4, because these are now essentially CPU-bound (or DB-bound, which amounts to the same thing for a sufficiently small DB and typical queries). This prevents cache thrashing, and the front-end nginx will serialize access to these backends. PHP processes don't have to stay around during nginx's delivery phase, so you don't want more than necessary. On a multi-core machine, aim for roughly your number of cores, but at least 2-4.
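For a prefork/mod_php backend, limits like these live in the MPM configuration. A sketch with illustrative numbers only (tune to your core count; on Apache 2.2 the directive is MaxClients rather than MaxRequestWorkers):

```apache
# prefork MPM limits for a small, CPU-bound mod_php backend
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         4
    MaxRequestWorkers       4     # roughly one worker per core
    MaxConnectionsPerChild  1000  # recycle workers to cap PHP memory growth
</IfModule>
```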
2. You want as many nginx processes as it takes to saturate your bandwidth and/or RAM. Certainly more than 4, and many more than your number of apache processes. You have to experiment here. With only 4, I suspect that during big traffic spikes some of your users could get "server not available" errors.
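The corresponding nginx knobs, as a sketch (numbers are illustrative; experiment as suggested above):

```nginx
# nginx worker settings for many concurrent slow clients
worker_processes auto;        # on newer nginx; or a fixed number per core

events {
    worker_connections 4096;  # each worker can hold many slow deliveries
}
```

One caveat: since nginx workers are event-driven, each one handles thousands of connections at once, so raising worker_connections usually matters more than raising the raw worker count.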
You can achieve the same effect with any other 2-tier setup, like apache (without mod_php) + FastCGI, or even two apache instances, one acting as a light reverse proxy and the other as the PHP engine. I recommend FastCGI because it is quite simple, highly scalable, and best of all, it adds security by letting you easily run the FastCGI processes as a user different from apache or www-data, all without having to learn and manage another web server.
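One common way to get that user separation with FastCGI is a PHP-FPM pool. A sketch; the pool name, user, and socket path here are assumptions for illustration:

```ini
; a php-fpm pool running as its own user, separate from the
; web server's user
[myapp]
user  = myapp
group = myapp
listen = /run/php/myapp.sock
; the web server's user must be able to connect to the socket
listen.owner = www-data
listen.group = www-data
; a small static pool, matching the few CPU-bound backends from point 1
pm = static
pm.max_children = 4
```

The front-end server then forwards PHP requests to the socket, and a compromise of the app code is contained to the `myapp` account rather than the web server's user.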