We got it. We opened the Nginx error log and noticed an unusual number of POST requests turning up in it. Digging in, it turned out to be three IP ranges overseas all hitting us with a "POST timeout attack." It was a light DDoS where connections were opened and held, forcing nginx to sit and wait for its upstream timeout, which in turn left Apache spawning more and more workers. Once we blocked those ranges, the Apache thread count started to drop for the first time, memory started to release, and the continuous flood of nginx error log entries stopped.
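
If you want to watch the recovery the same way, the Apache worker count is easy to track from the shell. A minimal sketch, assuming the worker processes are named apache2 (on RHEL-style systems they are usually httpd):

watch -n 5 'ps -C apache2 --no-headers | wc -l'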

If you are looking at the nginx error logs, this is the line to look for:

upstream timed out (110: Connection timed out) while reading response header from upstream, client:
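
To get a quick sense of how heavy the flood is, count the matching lines first (this assumes you are in the nginx log directory and the file is named error.log; adjust the path if yours differs):

grep -c "upstream timed out" error.log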

You can use a command like this to collect the offending IP addresses (the client IP is the 20th space-separated field in the stock error log format, so spot-check the output if your format differs; the tr strips the trailing comma and sort | uniq -c shows who is hitting you hardest):

grep "upstream timed out" error.log | cut -d' ' -f20 | tr -d ',' | sort | uniq -c | sort -rn

Then use your firewall to shut them down. We are all good now! Woot.
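
For reference, blocking a range with iptables looks roughly like this. The networks below are just documentation placeholders, so substitute the ranges your log actually shows, and persist the rules however your distro prefers (iptables-save, netfilter-persistent, etc.):

# Placeholder ranges; replace with the ones from your log.
iptables -I INPUT -s 203.0.113.0/24 -j DROP
iptables -I INPUT -s 198.51.100.0/24 -j DROP

# Confirm the rules are in place.
iptables -L INPUT -n --line-numbers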