First off, I wouldn't touch MAMP (or LAMP, XAMPP, etc.) with a barge pole if it were me. It's a personal preference, but hey. I run my entire dev stack on macOS using the stock Apache2 server, with PHP 5, Postgres, MySQL and other components out of Homebrew. I have at least six installs at any one time, plus all of the other things I'm doing (integration + stable for master, 25, and 24). Using Homebrew means I can run multiple versions of PHP simultaneously (against different VirtualHosts in Apache, for example). I'm also using Postgres from the petere/postgresql tap to give me the option of different versions of Postgres too.
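For anyone wondering how the "different PHP version per VirtualHost" bit works, here's a rough sketch. Since mod_php can only load one PHP version into Apache at a time, the usual approach is to run each Homebrew PHP's php-fpm on its own port and have each VirtualHost proxy PHP requests to the right one. All hostnames, paths, and port numbers below are made up for illustration:

```apache
# Hypothetical httpd.conf fragment: two vhosts, each backed by a
# different Homebrew php-fpm instance. Requires mod_proxy and
# mod_proxy_fcgi to be loaded.

<VirtualHost *:80>
    ServerName moodle25.local
    DocumentRoot "/usr/local/var/www/moodle25"
    # php-fpm from one Homebrew PHP build, listening on port 9056
    ProxyPassMatch "^/(.*\.php)$" "fcgi://127.0.0.1:9056/usr/local/var/www/moodle25/$1"
</VirtualHost>

<VirtualHost *:80>
    ServerName moodle24.local
    DocumentRoot "/usr/local/var/www/moodle24"
    # a second php-fpm build on its own port
    ProxyPassMatch "^/(.*\.php)$" "fcgi://127.0.0.1:9055/usr/local/var/www/moodle24/$1"
</VirtualHost>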
I have none of the MUC caching enabled on my stock installs, though I do use memcached out of Homebrew for testing MUC. I've not set up an opcode cache (APC, XCache, etc.) at present, though I should really fix that. I'm not actually sure any of them are even installed, come to think of it (should also fix that).
As I mentioned on Twitter, Moodle has grown a lot since version 1.9. Things like course completion, grouping features, context-specific permission overrides, capabilities, hidden activities, conditional activities, etc. don't come free and can require a fair amount of processing, either in PHP or in the database. It's usually not practical to strip these out entirely just because you're not using them, because most people are using them - even if not all of the time, or without knowing it.
A lot of work goes into optimising that processing, at both the peer-review and integration-review stages. Those database queries are usually optimised as best as possible, but every time you join a table, add a criterion, sort, group, etc. you increase the complexity. They'll still typically be far more efficient than getting the same result with a sub-select, for example.
However, they all use memory. As others have said, you can have the world's most powerful computer, but sticking an untuned MAMP on it won't make any use of that power. I don't know what the default configuration is for MySQL on MAMP, but the stock Postgres configuration only offers something like 24MB of shared memory. That isn't laziness, but an attempt to provide the most widely usable configuration for all systems - as I recall, Solaris in particular has limitations around the 24MB mark which require you to change system settings, and Postgres tries to run out of the box without requiring you to reconfigure your system. Clearly, those defaults aren't going to cut it in many cases, and you need to do some performance tuning. When there isn't enough allocated memory, most database engines will page the result set to disk in order to free up enough memory for the next stage of the query. Try running EXPLAIN ANALYZE against one of the slow queries to see what it's doing - I'm almost certain you'll find some which are doing exactly this. Disk is slow. Even SSD is slow compared to memory.
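As a rough starting point, a handful of postgresql.conf settings do most of the work here. The setting names below are standard Postgres configuration options, but the values are only illustrative guesses for a dev laptop with a few GB of RAM - tune them to your own machine. (A telltale sign in EXPLAIN ANALYZE output is a line like "Sort Method: external merge Disk: ...", which usually means work_mem is too low.)

```ini
# Hypothetical postgresql.conf fragment for a dev box - values are
# illustrative, not recommendations.
shared_buffers = 512MB       # the ~24MB stock default is far too small
work_mem = 32MB              # per-sort/per-hash memory; too low and sorts spill to disk
effective_cache_size = 2GB   # planner hint: how much the OS is likely to cache
maintenance_work_mem = 128MB # speeds up VACUUM, CREATE INDEX, etc.
```

Remember to restart Postgres after changing shared_buffers; on older systems you may also need to raise the kernel's shared memory limits.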
It's not about putting lipstick on a pig. No matter how fast a laptop you have, without tuning it's like having a 4-lane motorway with three of its lanes closed, the remaining one just a bit too narrow for the HGV in front of you, and full of potholes. Plus your tyre keeps going flat, so you have to get out and pump it up for long enough to get to the next junction. Not tuning your database is like buying a Formula 1 car and putting cheap petrol in it. It may run, but it's not going to be pretty.
I've not done a huge amount of tuning on my system (just Postgres, and I've barely done anything there), but using mdk to install a new Moodle instance with 100 students, 5 teachers, 5 editing teachers, and 20 courses, and to enrol each user in every course, takes 59 seconds on a MacBook Air with an i7 processor. In my opinion, that really isn't too shabby. The installation performs well too.
I daresay that if I configured an opcode cache, I could improve on that further.
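For what it's worth, on PHP 5.5+ an opcode cache (OPcache) ships with PHP itself; on older 5.x you'd reach for APC instead. A minimal sketch of enabling it in php.ini might look like this - directive names are real OPcache settings, but the values are just plausible dev-machine choices:

```ini
; Hypothetical php.ini fragment - enable the bundled OPcache extension
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128    ; MB of shared memory for cached scripts
opcache.max_accelerated_files=10000
opcache.revalidate_freq=0         ; re-check file timestamps on every request (handy for dev)
```

On a dev box you want revalidation on every request so edits show up immediately; on production you'd raise revalidate_freq for a further speedup.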