Keep in mind that this method is prone to latency, since you're making a system call for every random number you fetch.
The preferred method is to read several samples with a single call; if the pool has sufficient entropy, the per-sample cost is barely noticeable.
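For example, here's a minimal sketch in C of the batched approach: one read() fills a whole buffer of samples instead of issuing a system call per number. The 1024-sample buffer size is arbitrary, and error handling is trimmed to the essentials:

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned int samples[1024];          /* several samples at once */

    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* One system call for the whole buffer; a short read is possible
     * if the entropy pool drains, so check the byte count. */
    ssize_t got = read(fd, samples, sizeof samples);
    if (got < 0) {
        perror("read");
        close(fd);
        return EXIT_FAILURE;
    }
    close(fd);

    printf("read %zd bytes (%zd full samples)\n",
           got, got / (ssize_t)sizeof samples[0]);
    return EXIT_SUCCESS;
}
```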
That said, unless you are doing something very specific, usually cryptography-related, you don't need true randomness.
Using "/dev/random" for a seed is a good idea, but some PRNG have unreasonably long seeds. It would be more reasonable to read a set of samples from "/dev/random", stretch the entropy over a seed with a computationally complex PRNG, and feed the resulting seed into a cryptographically secure PRNG.
Of course, that would be kind of a waste of your time as a programmer; if you are already using "/dev/random", you can probably use "/dev/urandom" instead, which, on a lot of systems, will occasionally read samples from "/dev/random" and stretch the available entropy over a few thousand samples according to some schedule. Sure, you only get a fraction of a bit of entropy per sample, but that's more than sufficient for anything not intended for long-term cryptographic use. (So, you wouldn't use "/dev/urandom" to issue a security certificate.)
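If you do switch, a hypothetical helper along these lines shows the usual pattern: since "/dev/urandom" stretches the pool's entropy rather than blocking, the only thing left to handle is the possibility of a short read, so loop until the buffer is full:

```c
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>

/* Fill buf with len random bytes from /dev/urandom.
 * Returns 0 on success, -1 on error. */
int fill_random(void *buf, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;

    unsigned char *p = buf;
    while (len > 0) {
        ssize_t got = read(fd, p, len);
        if (got <= 0) {          /* error or unexpected EOF */
            close(fd);
            return -1;
        }
        p   += got;
        len -= (size_t)got;
    }
    close(fd);
    return 0;
}
```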