In chess, for a naive implementation, the branching factor averages about 30. Assuming 2 Mnps (million nodes per second), which is about right for programs running on a modern PC, a program can search about 5 plies in 10 seconds (log(20,000,000)/log(30)), or at most 10 plies using alpha-beta with perfect move ordering; realistically 6-7 plies. That is obviously no good: human masters can often look much further ahead than that.
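For concreteness, here is a minimal Python sketch of that arithmetic. The 2 Mnps rate and 10-second budget are just the assumptions above, and the sqrt(b) effective branching factor is the textbook best case for alpha-beta:

    import math

    def reachable_depth(nodes_per_sec, seconds, branching_factor, perfect_ordering=False):
        # With perfect move ordering, alpha-beta's effective branching
        # factor drops to roughly sqrt(b), doubling the reachable depth.
        nodes = nodes_per_sec * seconds
        b = math.sqrt(branching_factor) if perfect_ordering else branching_factor
        return math.log(nodes) / math.log(b)

    print(reachable_depth(2_000_000, 10, 30))        # ~4.9 plies, brute force
    print(reachable_depth(2_000_000, 10, 30, True))  # ~9.9 plies, perfect ordering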
If only a 20% speed improvement meant that you could search 20% deeper. Of course, it doesn't...
For years I've hacked on programs that play the game of Amazons. This game has an astounding branching factor of 2176 on the first move. Even assuming perfect alpha-beta, that's a tree growth factor of about 47 per half-ply (sqrt(2176)) in the opening game.
What this means, depressingly, is that to search even one full move deeper (a ply for my move plus a ply for the counter-move) in a given time, I must speed up my program by a factor of 2176 (47 × 47).
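A quick sanity check of those numbers, as a Python sketch:

    import math

    b = 2176                 # Amazons branching factor on move one
    eff = math.sqrt(b)       # effective growth per half-ply under perfect alpha-beta
    print(round(eff))        # 47
    print(round(eff ** 2))   # 2176: speedup needed for one full move (two half-plies)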
Counterintuitively, it is better instead to make your evaluation heuristic more "intelligent" even if that makes it slower. You won't lose much depth (maybe half a ply), but your estimates become more robust, so you perform better at those shallower search depths. Or to think of it another way: even if I made my evaluation function 2176 times slower, I'd only lose a single move of search depth.
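Putting a number on that trade-off, under the same perfect-ordering assumption: the depth given up for an N-times-slower evaluation is log(N)/log(sqrt(b)), so the penalty shrinks as the branching factor grows. A sketch:

    import math

    def plies_lost(slowdown, branching_factor):
        # Half-plies of depth given up by making evaluation `slowdown`
        # times slower, assuming alpha-beta with perfect move ordering.
        return math.log(slowdown) / math.log(math.sqrt(branching_factor))

    print(plies_lost(2176, 2176))  # 2.0 half-plies in Amazons: exactly one full move
    print(plies_lost(2176, 30))    # ~4.5 half-plies in chess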
Chess is somewhat different in that the branching factor is MUCH lower, and it can vary significantly from ply to ply. A 20% speed increase is nothing to sneeze at, but I have to wonder how it has actually affected your program's playing strength.