Preimage resistance is obviously one of the most important properties of a good hash function, but there are other points to consider as well. MD5, SHA-1, and SHA-2 are all susceptible to length extension attacks, and so cannot safely be used as message authentication codes in the naive H(key || message) construction. MD5 is broken outright: practical collision attacks exist. Few designs (even the venerable SHA-3) produce output of maximal entropy density, which ultimately opens the door to more efficient cryptanalysis. Most are based on what amounts to "complex shuffling routines" (to broadly simplify things, of course), a fact which makes it extremely difficult to *prove* anything about the security of the system. Finally, ALL of these algorithms are designed in a very rigid, non-scalable manner; intricate initialization constants seem to be the rule, meaning that a change from, say, a 256-bit to a 512-bit digest can be problematic and error-prone at best.
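To make the length-extension point concrete, here is a minimal sketch using a toy Merkle-Damgård hash. The names (`toy_hash`, `length_extend`) and the placeholder compression function are illustrative inventions, not any real algorithm, but the structure mirrors MD5/SHA-1/SHA-2: because the digest is just the final internal state, an attacker who knows H(secret || message) and the secret's *length* can keep hashing from where the server left off.

```python
import struct

# Toy Merkle-Damgård hash (64-byte blocks, 64-bit state), built only to
# demonstrate the length-extension flaw; the compression function is a
# deterministic placeholder and offers no security whatsoever.
MASK = (1 << 64) - 1

def compress(state, block):
    # Placeholder compression function (insecure, for illustration only).
    for byte in block:
        state = (state * 1099511628211 + byte) & MASK
    return state

def md_pad(msg_len):
    # MD-style padding: 0x80, zeros, then the 64-bit message bit length,
    # bringing the total length to a multiple of the 64-byte block size.
    zeros = (56 - (msg_len + 1)) % 64
    return b"\x80" + b"\x00" * zeros + struct.pack(">Q", msg_len * 8)

def toy_hash(msg, state=0x6A09E667F3BCC908):
    data = msg + md_pad(len(msg))
    for i in range(0, len(data), 64):
        state = compress(state, data[i:i + 64])
    return state  # the digest IS the final state -- the root of the flaw

def length_extend(digest, known_len, extension):
    # The attacker resumes hashing from the leaked digest, without ever
    # touching the secret itself.
    glue = md_pad(known_len)  # the padding the server implicitly hashed
    total_len = known_len + len(glue) + len(extension)
    data = extension + md_pad(total_len)
    state = digest
    for i in range(0, len(data), 64):
        state = compress(state, data[i:i + 64])
    return glue, state

# Server side: naive MAC = H(secret || message).
secret, message = b"server-secret", b"amount=100"
mac = toy_hash(secret + message)

# Attacker side: knows only mac, message, and len(secret).
glue, forged_mac = length_extend(mac, len(secret) + len(message),
                                 b"&amount=999999")
forged_message = message + glue + b"&amount=999999"

# The forged tag verifies without any knowledge of the secret.
assert toy_hash(secret + forged_message) == forged_mac
```

This is exactly why H(key || message) is unsafe with Merkle-Damgård hashes and why constructions like HMAC exist; SHA-3's sponge design avoids the problem by never exposing its full internal state.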
In contrast, what I am proposing instead rests on a very fundamental question in number theory, one which is many orders of magnitude easier to reason about than such "smoke and mirror" techniques (the well-understood Diffie-Hellman problem is a good example of what I am striving for here), and as a result it should be much more resilient to the types of vulnerabilities seen today (whether it is prone to some other kind of attack, of course, remains to be seen). Incidentally, the randomness that gives the algorithm its high degree of collision resistance isn't even forced by the design - it arises naturally in the normal course of operation. Last but not least, the whole scheme is easily scalable: one just has to modify the size parameter of a function call to produce a hash of arbitrary width.
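The details of my scheme aren't given here, but for readers unfamiliar with this style of design, the classic Chaum-van Heijst-Pfitzmann hash is a good illustration of what "provable, number-theoretic" means in practice: any collision directly yields a discrete logarithm, so collision resistance reduces to one well-understood problem. The sketch below uses deliberately tiny parameters; it is an example of the genre, not my proposal.

```python
# Illustrative sketch only -- NOT the scheme proposed in this article.
# Chaum-van Heijst-Pfitzmann compression: any collision reveals
# log_g(h), so collision resistance reduces to the discrete-log problem.

# Tiny demonstration parameters: p = 2q + 1 is a safe prime, and g, h
# are squares mod p, hence generators of the order-q subgroup, with
# log_g(h) unknown. Real use would need a large p -- and scaling up is
# just a matter of picking bigger parameters, which mirrors the
# scalability argument made above.
q, p = 1019, 2039
g, h = 4, 9

def cvhp_hash(x1, x2):
    # Compresses a pair (x1, x2), each in [0, q), to one value in [1, p).
    assert 0 <= x1 < q and 0 <= x2 < q
    return (pow(g, x1, p) * pow(h, x2, p)) % p
```

The security argument is a two-line reduction: a collision (x1, x2) != (y1, y2) gives g^(x1 - y1) = h^(y2 - x2) mod p, from which log_g(h) = (x1 - y1) * (y2 - x2)^(-1) mod q. Contrast that with the hundreds of pages of differential cryptanalysis needed to build confidence in a "shuffling"-style design.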
So yes, I do feel quite justified in saying that this is the right problem to be working on.